| Descriptor | Records |
| --- | --- |
| Multiple Choice Tests | 9 |
| Tables (Data) | 9 |
| Test Reliability | 9 |
| Test Validity | 4 |
| Guessing (Tests) | 3 |
| Scoring | 3 |
| Scoring Formulas | 3 |
| Weighted Scores | 3 |
| Correlation | 2 |
| Data Analysis | 2 |
| High School Students | 2 |
| Publication Type | Records |
| --- | --- |
| Numerical/Quantitative Data | 1 |
| Reports - Evaluative | 1 |
| Location | Records |
| --- | --- |
| Africa | 1 |
Peer reviewed: Krauft, Conrad C.; Beggs, Donald L. – Journal of Experimental Education, 1973
The purpose of the study was to determine whether a subject-weighted (SW) multiple-choice test-taking procedure would result in higher and more reliable scores than the conventional (C) multiple-choice test-taking procedure, both overall and at different levels of risk taking. (Author)
Descriptors: Attitudes, Educational Research, Multiple Choice Tests, Questionnaires
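The abstract does not spell out the SW procedure itself; the sketch below illustrates one generic confidence-weighting scheme (an assumption for illustration, not necessarily the authors' procedure), in which the examinee spreads a fixed budget of points across the options of each item and is credited with whatever points land on the keyed option, compared against conventional 1/0 scoring.

```python
def conventional_score(responses, key):
    """Conventional (C) scoring: one point per item whose chosen option matches the key."""
    return sum(1 for chosen, correct in zip(responses, key) if chosen == correct)

def subject_weighted_score(weightings, key):
    """Generic confidence-weighting sketch (illustrative assumption): the examinee spreads
    a fixed budget of points across the options of each item; the item score is the number
    of points placed on the keyed option."""
    return sum(item_weights.get(correct, 0) for item_weights, correct in zip(weightings, key))

# Hypothetical three-item example (10-point budget per item)
key = ["B", "A", "D"]
conventional = conventional_score(["B", "C", "D"], key)                    # 2
weighted = subject_weighted_score(
    [{"A": 2, "B": 8}, {"A": 5, "C": 5}, {"D": 10}], key)                  # 8 + 5 + 10 = 23
```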
Peer reviewed: Oosterhof, Albert C.; Glasnapp, Douglas R. – Journal of Experimental Education, 1974
The present study was initiated to investigate the comparability of multiple-choice and true-false item formats when the time necessary to respond to each type of item was equated empirically. (Editor)
Descriptors: Data Analysis, Guessing (Tests), Multiple Choice Tests, Objective Tests
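As a worked illustration of what equating response time empirically implies (the times here are hypothetical, not the study's data): if examinees average $\bar t_{MC}$ seconds per multiple-choice item and $\bar t_{TF}$ seconds per true-false item, a form of $n_{MC}$ multiple-choice items is time-equivalent to

$$n_{TF} = n_{MC}\,\frac{\bar t_{MC}}{\bar t_{TF}}$$

true-false items; for example, 30 multiple-choice items at 60 seconds each match 45 true-false items at 40 seconds each.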
Peer reviewed: Frary, Robert B.; And Others – Journal of Experimental Education, 1977
To date, a theoretical basis has not been developed for determining changes in reliability when score points from random guessing are eliminated and those from non-random guessing are retained. This paper presents a derivation of an expression for the reliability coefficient which displays the effect of deleting score components due to random…
Descriptors: Data Analysis, Guessing (Tests), Multiple Choice Tests, Scoring Formulas
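For context, the classical formula score that removes the expected contribution of random guessing on $k$-option items, with $R$ rights and $W$ wrongs, is

$$S = R - \frac{W}{k-1};$$

the paper's derivation concerns how deleting such random-guessing score components affects the reliability coefficient (the abstract does not state the derived expression itself).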
Peer reviewed: Frisbee, David A. – Journal of Educational Measurement, 1973
The purpose of this study was to gather empirical evidence to compare the reliabilities and concurrent validities of multiple-choice and true-false tests that were written to measure understandings and relationships in the same content areas. (Author)
Descriptors: Achievement Tests, Correlation, High School Students, Measurement
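A minimal sketch (not the authors' analysis) of the two quantities being compared, computed from a persons-by-items matrix of 0/1 item scores and an external criterion measure:

```python
import numpy as np

def kr20(items):
    """Kuder-Richardson 20 reliability for a persons-by-items matrix of 0/1 item scores."""
    k = items.shape[1]
    p = items.mean(axis=0)                       # proportion correct per item
    total = items.sum(axis=1)                    # total score per person
    return (k / (k - 1)) * (1.0 - (p * (1 - p)).sum() / total.var(ddof=1))

def concurrent_validity(total_scores, criterion):
    """Concurrent validity as the Pearson correlation with an external criterion measure."""
    return float(np.corrcoef(total_scores, criterion)[0, 1])
```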
Peer reviewed: Akeju, S. A. – Journal of Educational Measurement, 1972
The study was an attempt to evaluate the West African Examinations Council's efforts in terms of the extent to which its marking procedures have ensured high reader reliability for the English Language Essay examination, a test designed to measure writing ability. (Author)
Descriptors: Essay Tests, Examiners, Foreign Countries, Multiple Choice Tests
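The abstract does not state which index of reader reliability was used; one common index, shown here only as a sketch, is the correlation between two readers' independent marks on the same essays:

```python
import numpy as np

def reader_reliability(marks_a, marks_b):
    """Inter-reader reliability as the Pearson correlation between two readers'
    independent marks on the same set of essays (one common index; the study's
    actual procedure is not described in the abstract)."""
    return float(np.corrcoef(marks_a, marks_b)[0, 1])
```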
Peer reviewed: Reilly, Richard R.; Jackson, Rex – Journal of Educational Measurement, 1973
The present study suggests that although the reliability of an academic aptitude test given under formula-score condition can be increased substantially through empirical option weighting, much of the increase is due to the capitalization of the keying procedure on omitting tendencies which are reliable but not valid. (Author)
Descriptors: Aptitude Tests, Correlation, Factor Analysis, Item Sampling
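A sketch of one common way empirical option weights are derived (an illustration, not necessarily the keying procedure used in the study): every option of every item, including the omit response, is weighted by the mean criterion score of the examinees who chose it, which is how reliable-but-invalid omitting tendencies can be absorbed into the key.

```python
import numpy as np
from collections import defaultdict

def empirical_option_weights(choices, criterion):
    """choices: per-examinee lists of chosen options ('A'..'E', or None for an omit);
    criterion: one criterion score per examinee.
    Returns, for each item, a dict mapping each option (None = omit) to the mean
    criterion score of the examinees who chose it -- one common empirical weighting scheme."""
    n_items = len(choices[0])
    weights = []
    for item in range(n_items):
        by_option = defaultdict(list)
        for person, score in zip(choices, criterion):
            by_option[person[item]].append(score)
        weights.append({opt: float(np.mean(vals)) for opt, vals in by_option.items()})
    return weights
```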
Peer reviewed: Traub, Ross E.; Hambleton, Ronald K. – Educational and Psychological Measurement, 1972
Findings of this study suggest that it is preferable to attempt to control guessing through the use of the reward instruction rather than to attempt to control it using the penalty instruction or to encourage it using the instruction to guess. (Authors/MB)
Descriptors: Grade 8, Guessing (Tests), Multiple Choice Tests, Pacing
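The two instructions are conventionally paired with different scoring rules (a standard pairing, stated here as context rather than taken from the abstract): under the penalty instruction wrong answers are penalized, and under the reward instruction omitted items are credited,

$$S_{\text{penalty}} = R - \frac{W}{k-1}, \qquad S_{\text{reward}} = R + \frac{O}{k},$$

where $R$, $W$, and $O$ are the numbers of right, wrong, and omitted items and $k$ is the number of options per item.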
Pollock, Judith M.; And Others – 1997
This report describes an experiment in constructed response testing undertaken in conjunction with the National Education Longitudinal Study of 1988 (NELS:88). Constructed response questions are those that require students to produce their own response rather than selecting the correct answer from several options. Participants in this experiment…
Descriptors: Constructed Response, Costs, High School Students, High Schools
Echternacht, Gary – 1973
This study compares various item option scoring methods with respect to coefficient alpha and a concurrent validity coefficient. The scoring methods under consideration were: (1) formula scoring, (2) a priori scoring, (3) empirical scoring with an internal criterion, and (4) two modifications of formula scoring. The study indicates a clear…
Descriptors: Item Analysis, Measurement Techniques, Multiple Choice Tests, Performance Criteria
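A minimal sketch of coefficient alpha, the internal-consistency criterion on which the scoring methods are compared, for a persons-by-items matrix of item scores (the entries may be 0/1 or option weights, depending on the scoring method):

```python
import numpy as np

def coefficient_alpha(scores):
    """Cronbach's coefficient alpha for a persons-by-items matrix of item scores."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)
```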


