Publication Date

| Date Range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 10 |
| Since 2022 (last 5 years) | 40 |
| Since 2017 (last 10 years) | 118 |
| Since 2007 (last 20 years) | 211 |
Descriptor

| Descriptor | Records |
| --- | --- |
| Multiple Choice Tests | 532 |
| Test Reliability | 532 |
| Test Validity | 302 |
| Test Construction | 238 |
| Test Items | 172 |
| Foreign Countries | 114 |
| Item Analysis | 101 |
| Higher Education | 90 |
| Difficulty Level | 85 |
| Guessing (Tests) | 74 |
| Scoring | 69 |
Author

| Author | Records |
| --- | --- |
| Ebel, Robert L. | 10 |
| Frary, Robert B. | 9 |
| Alonzo, Julie | 7 |
| Frisbie, David A. | 6 |
| Irvin, P. Shawn | 6 |
| Lai, Cheng-Fei | 6 |
| Park, Bitnara Jasmine | 6 |
| Tindal, Gerald | 6 |
| Wilcox, Rand R. | 5 |
| Albanese, Mark A. | 4 |
| Biancarosa, Gina | 4 |
Audience

| Audience | Records |
| --- | --- |
| Researchers | 11 |
| Practitioners | 8 |
| Teachers | 5 |
Location

| Location | Records |
| --- | --- |
| Indonesia | 17 |
| Turkey | 17 |
| Germany | 8 |
| Iran | 8 |
| Canada | 6 |
| Malaysia | 4 |
| Nigeria | 4 |
| Australia | 3 |
| Florida | 3 |
| Japan | 3 |
| Pakistan | 3 |
Laws, Policies, & Programs

| Law/Program | Records |
| --- | --- |
| No Child Left Behind Act 2001 | 1 |
Green, Kathy; Sax, Gilbert – 1981
Achievement test reliability as a function of ability was determined for multiple sections of a large university French class (n=193). A 5-option multiple-choice examination was constructed, least attractive distractors were eliminated based on the instructor's judgment, and the resulting three forms of the examination (i.e. 3-, 4-, or 5-choice…
Descriptors: Academic Ability, Achievement Tests, French, Higher Education
Ebel, Robert L. – 1971
The suggestion that multiple-choice items can be converted to true-false items without essentially changing what the item measures and with possible improvement in efficiency is investigated. Each of the 90 four-choice items in a natural science test was rewritten into a pair of true-false items, one true, one false. The resulting 180 items were…
Descriptors: Achievement Tests, Comparative Analysis, Item Analysis, Multiple Choice Tests
Oosterhof, Albert C.; Glasnapp, Douglas R. – 1972
The present study was concerned with several currently unanswered questions, two of which are: what is an empirically determined ratio of multiple choice to equivalent true-false items which can be answered in a given amount of time?; and for achievement test items administered within a classroom situation, which of the two formats under…
Descriptors: Comparative Analysis, Guessing (Tests), Multiple Choice Tests, Objective Tests
Peer reviewed
Cross, Lawrence H.; Frary, Robert B. – Educational and Psychological Measurement, 1978
The reliability and validity of multiple choice test scores resulting from empirical choice-weighting of alternatives was examined under two conditions: (1) examinees were told not to guess unless choices could be eliminated; and (2) examinees were told the total score would be the total number correct. Results favored the choice-weighting…
Descriptors: Guessing (Tests), Higher Education, Multiple Choice Tests, Response Style (Tests)
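The snippet above does not spell out how the empirical choice weights were derived; one common scheme, sketched here purely as an illustration (the function name and the weighting rule are assumptions, not Cross and Frary's exact procedure), weights each alternative by the mean total test score of the examinees who selected it:

```python
from collections import defaultdict

def empirical_choice_weights(choices, total_scores):
    """One simple empirical choice-weighting scheme (illustrative only):
    weight each alternative of an item by the mean total test score of
    the examinees who chose it, so picking a distractor favored by
    high-scoring examinees earns more credit than picking one favored
    by low-scoring examinees.

    choices      -- option each examinee chose for one item, e.g. 'A'
    total_scores -- parallel list of those examinees' total test scores
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for opt, score in zip(choices, total_scores):
        sums[opt] += score
        counts[opt] += 1
    # mean criterion score per alternative serves as its weight
    return {opt: sums[opt] / counts[opt] for opt in sums}
```

For example, `empirical_choice_weights(['A', 'A', 'B', 'C'], [38, 32, 20, 14])` weights option A at 35.0, B at 20.0, and C at 14.0; in practice such weights would be rescaled before scoring.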
Peer reviewed
Kane, Michael; Moloney, James – Applied Psychological Measurement, 1978
The answer-until-correct (AUC) procedure requires that examinees respond to a multi-choice item until they answer it correctly. Using a modified version of Horst's model for examinee behavior, this paper compares the effect of guessing on item reliability for the AUC procedure and the zero-one scoring procedure. (Author/CTM)
Descriptors: Guessing (Tests), Item Analysis, Mathematical Models, Multiple Choice Tests
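The contrast between AUC and zero-one scoring can be sketched with a small simulation. The linear partial-credit rule used below, (options − attempts) / (options − 1), is a common AUC scheme but an assumption here, not the modified Horst model of the paper; `simulate_item` and `p_know` are illustrative names:

```python
import random

def simulate_item(p_know, n_options=4, trials=50_000, seed=0):
    """Compare answer-until-correct (AUC) and zero-one scoring on one item.

    Model (an assumption for illustration): with probability p_know the
    examinee knows the answer and is correct on the first attempt;
    otherwise they guess options at random without replacement, so the
    number of attempts is uniform on 1..n_options.
    AUC credit: (n_options - attempts) / (n_options - 1).
    Zero-one credit: 1 only if the first attempt is correct.
    Returns the mean AUC score and mean zero-one score over `trials`.
    """
    rng = random.Random(seed)
    auc_total = 0.0
    zero_one_total = 0
    for _ in range(trials):
        if rng.random() < p_know:
            attempts = 1
        else:
            attempts = rng.randint(1, n_options)  # guessing without replacement
        auc_total += (n_options - attempts) / (n_options - 1)
        zero_one_total += 1 if attempts == 1 else 0
    return auc_total / trials, zero_one_total / trials
```

Under this model a pure guesser on a four-option item expects about 0.5 under AUC credit but only 0.25 under zero-one scoring, which is why the two procedures interact differently with guessing.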
Peer reviewed
Remer, Rory – Journal of Educational Measurement, 1978
The relative efficiency and cost-effectiveness of three methods of producing and administering a worksample simulation test of interpersonal communication competence employing a multiple choice response format is explored. (Author/JKS)
Descriptors: Communication Skills, Cost Effectiveness, Higher Education, Interpersonal Competence
Peer reviewed
Eakin, Richard R.; Long, Clifford A. – Educational and Psychological Measurement, 1977
A scoring technique for true-false tests is presented. The technique, paired item scoring, involves combining two statements and having the student select one of the four resultants possible: true-true, false-true, true-false, and false-false. The combined item is treated as a multiple choice item. (Author/JKS)
Descriptors: Guessing (Tests), Measurement Techniques, Multiple Choice Tests, Objective Tests
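The paired-item construction described above is mechanical enough to sketch in code. This is a minimal illustration of the idea only (the function and field names are invented for the example): two true-false statements become one four-option item whose options enumerate the truth-value pairs, cutting the blind-guessing success rate from 1/2 per statement to 1/4 for the pair:

```python
from itertools import product

def pair_items(stmt1, stmt2, stmt1_true, stmt2_true):
    """Combine two true/false statements into one 4-option item.

    Options enumerate every truth-value pair (TT, TF, FT, FF); the key
    is the option matching the actual truth values of the statements.
    A blind guesser now succeeds with probability 1/4 rather than 1/2
    on each statement separately.
    """
    combos = list(product([True, False], repeat=2))  # TT, TF, FT, FF
    options = [f"{'T' if a else 'F'}{'T' if b else 'F'}" for a, b in combos]
    key_index = combos.index((stmt1_true, stmt2_true))
    return {"stem": (stmt1, stmt2), "options": options, "answer": options[key_index]}
```

For instance, pairing a true statement with a false one yields the key "TF" among the four options, and the combined item is then scored like any other multiple-choice item.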
Peer reviewed
Hodson, D. – Research in Science and Technological Education, 1984
Investigated the effect on student performance of changes in question structure and sequence on a GCE O-level multiple-choice chemistry test. One finding noted is that there was virtually no change in test reliability on reducing the number of options per test item from five. (JN)
Descriptors: Academic Achievement, Chemistry, Multiple Choice Tests, Science Education
McCowan, Richard J. – Online Submission, 1999
Item writing is a major responsibility of trainers. Too often, qualified staff who prepare lessons carefully and teach conscientiously use inadequate tests that do not validly reflect the true level of trainee achievement. This monograph describes techniques for constructing multiple-choice items that measure student performance accurately. It…
Descriptors: Multiple Choice Tests, Item Analysis, Test Construction, Test Items
Peer reviewed
Frisbie, David A. – Journal of Educational Measurement, 1973
The purpose of this study was to gather empirical evidence to compare the reliabilities and concurrent validities of multiple choice and true-false tests that were written to measure understandings and relationships in the same content areas. (Author)
Descriptors: Achievement Tests, Correlation, High School Students, Measurement
Peer reviewed
Akeju, S. A. – Journal of Educational Measurement, 1972
Study was an attempt to evaluate the West African Examinations Council efforts in terms of the extent to which its marking procedures have ensured high reader reliability for the English Language Essay examination, a test which was designed to measure writing ability. (Author)
Descriptors: Essay Tests, Examiners, Foreign Countries, Multiple Choice Tests
Wilbur, Paul H. – Journal of Educational Measurement, 1970
Descriptors: High School Students, Multiple Choice Tests, Patterned Responses, Responses
Peer reviewed
Weiten, Wayne – Journal of Experimental Education, 1982
A comparison of double as opposed to single multiple-choice questions yielded significant differences in regard to item difficulty, item discrimination, and internal reliability, but not concurrent validity. (Author/PN)
Descriptors: Difficulty Level, Educational Testing, Higher Education, Multiple Choice Tests
Peer reviewed
Kolstad, Rosemarie; And Others – Journal of Dental Education, 1982
Nonrestricted-answer, multiple-choice test items are recommended as a way of including more facts and fewer incorrect answers in test items, and they do not cue successful guessing as restricted multiple choice items can. Examination construction, scoring, and reliability are discussed. (MSE)
Descriptors: Guessing (Tests), Higher Education, Item Analysis, Multiple Choice Tests
Peer reviewed
Green, Kathy; And Others – Educational and Psychological Measurement, 1982
Achievement test reliability and validity as a function of ability were determined for multiple sections of a large undergraduate French class. Results did not support previous arguments that decreasing the number of options results in a more efficient test for high-level examinees, but less efficient for low-level examinees. (Author/GK)
Descriptors: Academic Ability, Comparative Analysis, Higher Education, Multiple Choice Tests


