Showing 211 to 225 of 532 results
Peer reviewed
Gibbons, Jean D.; And Others – Psychometrika, 1979
On a multiple-choice test in which each item has k alternative responses, the test taker is permitted to choose any subset which he believes contains the one correct answer. A scoring system is devised. (Author/CTM)
Descriptors: Confidence Testing, Efficiency, Multiple Choice Tests, Scoring
Peer reviewed
Nishisato, Shizuhiko; Sheu, Wen-Jenn – Psychometrika, 1980
A modification of the method of reciprocal averages for scaling multiple choice data is proposed. The proposed method handles the data in a piecewise fashion and allows for faster convergence to a solution. (Author/JKS)
Descriptors: Item Analysis, Measurement Techniques, Multiple Choice Tests, Test Reliability
Trieber, J. Marshall – Training and Development Journal, 1980
Aims to help instructors make more valid test questions, particularly multiple-choice ones. Emphasis is placed on multiple-choice questions to show the wealth of opportunities they offer for testing because of their uses, objectivity, and ease of grading. Discusses test scheduling, construction, and evaluation and follow-up. (CT)
Descriptors: Multiple Choice Tests, Test Construction, Test Reliability, Test Validity
Peer reviewed
Holmes, Roy A.; And Others – Educational and Psychological Measurement, 1974
Descriptors: Chemistry, Multiple Choice Tests, Scoring Formulas, Test Reliability
Strasler, Gregg M.; Raeth, Peter G. – 1977
The study investigated the feasibility of adapting the coefficient kappa introduced by Cohen (1960) and elaborated by Swaminathan, Hambleton, and Algina (1974) into an internal consistency estimate for criterion-referenced tests from a single test administration. The authors proposed the use of kappa as an internal consistency estimate by logically dividing…
Descriptors: Computer Programs, Criterion Referenced Tests, Multiple Choice Tests, Test Reliability
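The approach in the abstract above can be sketched in miniature: split a single administration into two logical half-tests, classify each examinee as master/nonmaster on each half against a cut score, and compute Cohen's kappa (1960) over the two classifications as a decision-consistency estimate. The odd/even split, cut score, and response data below are illustrative assumptions, not details from the study.

```python
# Hedged sketch: Cohen's kappa as a single-administration internal
# consistency estimate for a criterion-referenced test.

def kappa(class_a, class_b):
    """Cohen's kappa for two parallel lists of 0/1 mastery classifications."""
    n = len(class_a)
    po = sum(a == b for a, b in zip(class_a, class_b)) / n  # observed agreement
    pa = sum(class_a) / n  # mastery rate on half A
    pb = sum(class_b) / n  # mastery rate on half B
    pe = pa * pb + (1 - pa) * (1 - pb)  # chance agreement from the marginals
    return (po - pe) / (1 - pe)

# Logically divide each examinee's responses into odd/even half-tests,
# classify each half against an (assumed) cut score, then compute kappa.
responses = [
    [1, 1, 1, 0, 1, 1],  # item scores per examinee (1 = correct)
    [1, 0, 1, 1, 0, 1],
    [0, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 1, 1],
]
cut = 2  # mastery cut score per half-test (assumed)
half_a = [sum(r[0::2]) >= cut for r in responses]
half_b = [sum(r[1::2]) >= cut for r in responses]
print(round(kappa(half_a, half_b), 3))  # → 1.0 (both halves classify identically here)
```

Kappa corrects raw agreement for the agreement expected by chance alone, which is why it is preferred over percent agreement for mastery/nonmastery decisions.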
Peer reviewed
Gilman, David Alan; Ferry, Paula – Journal of Educational Measurement, 1972
Results indicate that scoring tests by the self-scoring method can result in a higher split half reliability than tests scored by the traditional right-wrong method. (Authors)
Descriptors: Data Analysis, Multiple Choice Tests, Scoring, Test Construction
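The split-half reliability statistic that the study compares across scoring methods can be sketched as follows: correlate odd-item and even-item half scores, then apply the Spearman-Brown correction to project the correlation to full test length. The score matrix is illustrative; the Pearson correlation is written out in pure Python to stay self-contained.

```python
# Hedged sketch: split-half reliability with the Spearman-Brown correction.

def pearson(x, y):
    """Pearson correlation of two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(scores):
    """Correlate odd-item and even-item half scores, then apply the
    Spearman-Brown correction for doubling the test length."""
    odd = [sum(r[0::2]) for r in scores]
    even = [sum(r[1::2]) for r in scores]
    r = pearson(odd, even)
    return 2 * r / (1 + r)

scores = [  # item scores per examinee (1 = correct); illustrative data
    [1, 1, 1, 0, 1, 1],
    [1, 0, 1, 1, 0, 1],
    [0, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 1, 0],
]
print(round(split_half_reliability(scores), 3))  # → 0.899
```

Because each half contains only half the items, the raw half-test correlation understates full-test reliability; the Spearman-Brown step corrects for that.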
Sabers, Darrell L.; White, Gordon W. – J Educ Meas, 1969
Descriptors: Aptitude Tests, Multiple Choice Tests, Predictive Validity, Scoring
Ebel, Robert L. – Educ Psychol Meas, 1969
Descriptors: Item Analysis, Multiple Choice Tests, Objective Tests, Test Reliability
Rich, Charles E.; Johanson, George A. – 1990
Despite the existence of little empirical evidence for their effectiveness, many techniques have been suggested for writing multiple-choice items. The option "none of the above" (NA) has been widely used although a recent review of empirical studies of NA suggests that, while generally decreasing the difficulty index, NA also decreases…
Descriptors: Difficulty Level, Item Analysis, Multiple Choice Tests, Test Construction
Peer reviewed
Hanna, Gerald S. – Journal of Educational Measurement, 1975
An alternative to the conventional right-wrong scoring method used on multiple-choice tests was presented. In the experiment, the examinee continued to respond to a multiple-choice item until feedback signified a correct answer. Findings showed that experimental scores were more reliable but less valid than inferred conventional scores.…
Descriptors: Feedback, Higher Education, Multiple Choice Tests, Scoring
Peer reviewed
Raffeld, Paul – Journal of Educational Measurement, 1975
Results support the contention that a Guttman-weighted objective test can have psychometric properties that are superior to those of its unweighted counterpart, as long as omissions do not exist or are assigned a value equal to the mean of the k item alternative weights. (Author/BJG)
Descriptors: Multiple Choice Tests, Predictive Validity, Test Reliability, Test Validity
Peer reviewed
Crehan, Kevin D.; Slakter, Malcolm J. – Psychological Reports, 1971
Descriptors: Comparative Analysis, Multiple Choice Tests, Test Construction, Test Reliability
Hambleton, Ronald K.; And Others – J Educ Meas, 1970
Descriptors: Comparative Analysis, Evaluation Methods, Multiple Choice Tests, Test Reliability
Peer reviewed
Wilcox, Rand R. – Educational and Psychological Measurement, 1982
Results in the engineering literature on "k out of n system reliability" can be used to characterize tests based on estimates of the probability of correctly determining whether the examinee knows the correct response. In particular, the minimum number of distractors required for multiple-choice tests can be empirically determined.…
Descriptors: Achievement Tests, Mathematical Models, Multiple Choice Tests, Test Format
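The engineering formula behind the abstract above is standard: a k-out-of-n system of independent components, each working with probability p, functions when at least k components work. By analogy, an n-item test yields a correct mastery decision when at least k items individually point to the right decision. The parameters in the example are assumptions for illustration, not values from the article.

```python
# Hedged sketch: k-out-of-n system reliability applied to an n-item test.

from math import comb

def k_out_of_n_reliability(n, k, p):
    """P(at least k of n independent components with success prob p work):
    the upper tail of a Binomial(n, p) distribution."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# e.g. 10 items, each yielding a correct decision about the examinee with
# probability 0.8: how reliable is an at-least-6 (majority) decision rule?
print(round(k_out_of_n_reliability(10, 6, 0.8), 4))  # → 0.9672
```

Sweeping p over values implied by different numbers of distractors is one way such a model could inform the minimum number of distractors needed per item.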
Peer reviewed
Attali, Yigal – ETS Research Report Series, 2004
Contrary to common belief, reliability estimates of number-right multiple-choice tests are not inflated by speededness. Because examinees guess on questions when they run out of time, the responses to these questions show less consistency with the responses of other questions, and the reliability of the test will be decreased. The surprising…
Descriptors: Multiple Choice Tests, Timed Tests, Test Reliability, Guessing (Tests)
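The mechanism Attali describes can be illustrated with a toy simulation: examinees who run out of time guess blindly on the final items, those guessed responses are inconsistent with the rest of the test, and Cronbach's alpha drops relative to the unspeeded condition. The test length, ability model, and guessing rate below are all assumptions made for the sketch, not parameters from the report.

```python
# Hedged sketch: speededness-induced guessing lowers internal consistency.

import random

def cronbach_alpha(item_scores):
    """Cronbach's alpha; item_scores is a list of examinees, each a 0/1 list."""
    n_items = len(item_scores[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = sum(var([r[i] for r in item_scores]) for i in range(n_items))
    total_var = var([sum(r) for r in item_scores])
    return n_items / (n_items - 1) * (1 - item_vars / total_var)

random.seed(0)
n_examinees, n_items, k_options = 500, 20, 4
abilities = [random.random() for _ in range(n_examinees)]

def simulate(speeded):
    data = []
    for theta in abilities:
        row = []
        for i in range(n_items):
            if speeded and i >= n_items - 5:
                # out of time: blind guess among k options
                row.append(1 if random.random() < 1 / k_options else 0)
            else:
                # ability-driven response
                row.append(1 if random.random() < theta else 0)
        data.append(row)
    return data

print(cronbach_alpha(simulate(False)) > cronbach_alpha(simulate(True)))  # → True
```

This agrees with the abstract's point: the speeded responses behave like noise items, so the reliability estimate is deflated by speededness, not inflated.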