Publication Date
| Date range | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 85 |
| Since 2022 (last 5 years) | 453 |
| Since 2017 (last 10 years) | 1241 |
| Since 2007 (last 20 years) | 2515 |
Audience
| Audience | Results |
| --- | --- |
| Practitioners | 122 |
| Teachers | 105 |
| Researchers | 64 |
| Students | 46 |
| Administrators | 14 |
| Policymakers | 7 |
| Counselors | 3 |
| Parents | 3 |
Location
| Location | Results |
| --- | --- |
| Canada | 134 |
| Turkey | 131 |
| Australia | 123 |
| Iran | 66 |
| Indonesia | 61 |
| United Kingdom | 51 |
| Germany | 50 |
| Taiwan | 46 |
| United States | 43 |
| China | 39 |
| California | 35 |
What Works Clearinghouse Rating
| Rating | Results |
| --- | --- |
| Meets WWC Standards without Reservations | 3 |
| Meets WWC Standards with or without Reservations | 5 |
| Does not meet standards | 6 |
Donlon, Thomas F. – Journal of Educational Measurement, 1981 (peer reviewed)
Scores within the chance range are differentiated, "uninterpretable" scores being those that demonstrate randomness (broadly defined) by failing to achieve typical levels of correlation with group-determined difficulty. The relevant literature is reviewed. Finally, randomness and uninterpretability are examined in light of the…
Descriptors: Difficulty Level, Guessing (Tests), Multiple Choice Tests, Scores
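The index sketched in this abstract can be illustrated directly: correlate one examinee's item-level scores with the group's item easiness values, and treat near-zero correlations as evidence of randomness. A minimal sketch with synthetic data (the simulation and names are illustrative, not Donlon's procedure):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 0/1 response matrix: rows are examinees, columns are items.
responses = rng.integers(0, 2, size=(200, 40))

# Group-determined easiness: proportion answering each item correctly.
easiness = responses.mean(axis=0)

def randomness_index(pattern, easiness):
    """Correlation of one examinee's item scores with group easiness;
    values near zero flag a random, hence uninterpretable, pattern."""
    return np.corrcoef(pattern, easiness)[0, 1]

print(randomness_index(responses[0], easiness))  # near zero for random data
```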
Noble, John W. – Phi Delta Kappan, 1980
Describes a program in which a computer is used to store test items and to construct tests for algebra and geometry classes. (IRT)
Descriptors: Computers, Mathematics Instruction, Multiple Choice Tests, Secondary Education
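The program is described rather than specified, but the core idea (store tagged items, assemble a test by drawing from the pool) is easy to sketch. Everything below is hypothetical:

```python
import random

# Hypothetical miniature item bank: each item tagged by topic and a
# difficulty estimate (proportion correct in past administrations).
bank = [
    {"id": 1, "topic": "algebra",  "p": 0.85},
    {"id": 2, "topic": "algebra",  "p": 0.55},
    {"id": 3, "topic": "geometry", "p": 0.70},
    {"id": 4, "topic": "geometry", "p": 0.40},
]

def build_test(bank, topic, n_items, seed=None):
    """Assemble a test by sampling items of the requested topic."""
    pool = [item for item in bank if item["topic"] == topic]
    return random.Random(seed).sample(pool, n_items)

print([item["id"] for item in build_test(bank, "algebra", 2, seed=1)])
```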
Vidler, Derek; Hansen, Richard – Journal of Experimental Education, 1980 (peer reviewed)
Relationships among patterns of answer changing and item characteristics on multiple-choice tests are discussed. Results obtained were similar to those found in previous studies but pointed to further relationships among these variables. (Author/GK)
Descriptors: College Students, Difficulty Level, Higher Education, Multiple Choice Tests
Hamdan, M. A. – Journal of Experimental Education, 1979 (peer reviewed)
The distribution theory underlying corrections for guessing is analyzed, and the probability distributions of the random variables are derived. The correction in grade, based on random guessing of unknown answers, is compared with corrections based on educated guessing. (Author/MH)
Descriptors: Guessing (Tests), Maximum Likelihood Statistics, Multiple Choice Tests, Probability
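For reference, the classical formula-scoring correction that this line of work analyzes is, for k-option items (the general form, not Hamdan's specific derivation):

$$S = R - \frac{W}{k-1},$$

where $R$ and $W$ count right and wrong answers. If an examinee guesses blindly on $g$ items, the expected rights from guessing are $g/k$ and the expected wrongs $g(k-1)/k$, so the penalty cancels the lucky guesses in expectation.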
Houston, John P. – Journal of Educational Psychology, 1976 (peer reviewed)
Two experiments investigated the amount and loci of multiple-choice examination answer copying as related to the spacing of students and the use of alternate test forms. (RC)
Descriptors: Cheating, Classroom Design, College Students, Higher Education
Hansen, James D.; Dexter, Lee – Journal of Education for Business, 1997 (peer reviewed)
Analysis of test item banks in 10 auditing textbooks found that 75% of questions violated one or more guidelines for multiple-choice items. In comparison, 70% of a certified public accounting exam bank had no violations. (SK)
Descriptors: Accounting, Guidelines, Item Banks, Multiple Choice Tests
Haynie, W. J., III – Journal of Industrial Teacher Education, 2003 (peer reviewed)
Fifty-two undergraduate metals technology students took a multiple-choice test, 49 took a matching test, and 47 took no test. All took an unannounced delayed retention test 3 weeks later. Initial testing aided retention, with no differences between matching and multiple-choice groups. The matching group had a small but significant score increase on new…
Descriptors: Higher Education, Metals, Multiple Choice Tests, Objective Tests
Clute, Ronald C.; McGrail, George R. – Journal of Education for Business, 1989 (peer reviewed)
Eight test banks that accompany cost accounting textbooks were evaluated for the presence of bias in the distribution of correct responses. All but one were found to have considerable bias, and three of the eight were found to have significant choice bias. (SK)
Descriptors: Accounting, Higher Education, Item Banks, Multiple Choice Tests
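A simple way to screen an answer key for this kind of position bias is a chi-square goodness-of-fit test against a uniform distribution over response positions. A sketch (the key below is invented, not data from the study):

```python
from collections import Counter
from scipy.stats import chisquare

# Hypothetical answer key: position (A-D) of the correct response
# for each item in a bank.
key = "CBCCACDBCCBACCDCCBAC"

counts = Counter(key)
observed = [counts.get(pos, 0) for pos in "ABCD"]

# Null hypothesis: correct answers are uniformly spread over A-D.
stat, p = chisquare(observed)
print(observed, round(stat, 2), round(p, 4))
```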
Frary, Robert B. – Journal of Educational Measurement, 1989 (peer reviewed)
Responses to a 50-item, 4-choice test were simulated for 1,000 examinees under conventional formula-scoring instructions. Based on 192 simulation runs, formula scores and expected formula scores were determined for each examinee allowing and not allowing for inappropriate omissions. (TJH)
Descriptors: Computer Simulation, Difficulty Level, Guessing (Tests), Multiple Choice Tests
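A stripped-down version of such a simulation is easy to reproduce. The knowledge-or-random-guessing response model below is an assumption for illustration, not Frary's actual model, and it allows no omissions:

```python
import numpy as np

rng = np.random.default_rng(0)

N_EXAMINEES, N_ITEMS, N_CHOICES = 1000, 50, 4

# Illustrative response model: each examinee knows each item with
# probability equal to their "ability" and guesses at random otherwise.
ability = rng.uniform(0.2, 0.9, size=N_EXAMINEES)
knows = rng.random((N_EXAMINEES, N_ITEMS)) < ability[:, None]
lucky = rng.random((N_EXAMINEES, N_ITEMS)) < 1.0 / N_CHOICES
correct = knows | (~knows & lucky)

rights = correct.sum(axis=1)
wrongs = N_ITEMS - rights          # no omissions in this sketch

# Formula score R - W/(k-1): pure guessing scores zero in expectation.
formula = rights - wrongs / (N_CHOICES - 1)
print(round(formula.mean(), 2), round(formula.std(), 2))
```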
Harris, Diana K.; Changas, Paul S. – Educational Gerontology, 1994 (peer reviewed)
Palmore's Facts on Aging Quiz in standard true-false format was completed by 180 sociology students, and a revised multiple-choice version by 195 students. The multiple-choice test decreased guessing, had greater internal consistency and reliability, tended to have higher discriminatory power, and more precisely identified misconceptions about…
Descriptors: Aging (Individuals), Comparative Testing, Multiple Choice Tests, Objective Tests
Buchanan, Jeff – Guidelines, 1992
Guidelines that language teachers need in order to write valid multiple-choice items for vocabulary tests and exams are presented. A sample test is accompanied by an analysis of possible answers to each question. (LB)
Descriptors: Foreign Countries, Guidelines, Language Tests, Multiple Choice Tests
Bradlow, Eric T.; Wainer, Howard; Wang, Xiaohui – Psychometrika, 1999 (peer reviewed)
Proposes a parametric approach that involves a modification of standard Item Response Theory models that explicitly accounts for the nesting of items within the same testlets and that can be applied to multiple-choice sections comprising a mixture of independent items and testlets. (Author/SLD)
Descriptors: Bayesian Statistics, Item Response Theory, Models, Multiple Choice Tests
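The testlet extension of the two-parameter logistic model is commonly written with an extra person-by-testlet term; a sketch of that form (notation assumed here, not quoted from the paper):

$$P(y_{ij}=1 \mid \theta_i) = \frac{\exp\{a_j(\theta_i - b_j - \gamma_{i\,d(j)})\}}{1 + \exp\{a_j(\theta_i - b_j - \gamma_{i\,d(j)})\}},$$

where $d(j)$ indexes the testlet containing item $j$ and the random effect $\gamma_{i\,d(j)}$ absorbs the extra dependence among items nested in the same testlet; setting it to zero recovers the standard model for independent items.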
Segall, Daniel O. – Psychometrika, 2001 (peer reviewed)
Proposed and evaluated two new methods of improving the measurement precision of a general test factor. One provides a multidimensional item response theory estimate based on administrations of multiple-choice test items that span general and nuisance dimensions, and the other chooses items adaptively to maximize the precision of the general…
Descriptors: Ability, Adaptive Testing, Item Response Theory, Measurement Techniques
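One common way to write a model with a general dimension plus nuisance dimensions is a bifactor-style structure in which each item loads on the general factor and on one specific factor (a sketch; the abstract does not give Segall's exact parameterization):

$$P(y_{ij}=1 \mid \boldsymbol{\theta}_i) = \frac{1}{1 + \exp\!\left[-\left(a^{(g)}_{j}\theta^{(g)}_{i} + a^{(s)}_{j}\theta^{(s(j))}_{i} - b_j\right)\right]},$$

where $\theta^{(g)}$ is the general ability and $s(j)$ is the nuisance dimension tapped by item $j$; adaptive selection can then favor items carrying the most information about $\theta^{(g)}$.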
Taylor, Charles; Gardner, Paul L. – Research in Science Education, 1999 (peer reviewed)
Discusses a simple modification to the method of answering and scoring multiple choice tests that allows students to indicate their estimates of the probability of the correctness of the multiple choice options for each question without affecting the validity of the assessment. (Author/CCM)
Descriptors: Evaluation, Multiple Choice Tests, Physics, Science Education
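The abstract does not name the scoring rule, but a standard choice for probability responses is a quadratic (Brier-type) rule, which rewards well-calibrated confidence. A hypothetical sketch:

```python
import numpy as np

def brier_credit(probs, correct_idx):
    """Quadratic (Brier-type) credit for one item: 1 minus the squared
    distance between the stated distribution and the true answer."""
    probs = np.asarray(probs, dtype=float)
    truth = np.zeros_like(probs)
    truth[correct_idx] = 1.0
    return 1.0 - np.sum((probs - truth) ** 2)

# An examinee 70% sure of option B on a four-option item:
print(brier_credit([0.1, 0.7, 0.1, 0.1], correct_idx=1))  # 0.88
```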
Kim, Seonghoon; Lee, Won-Chan – Journal of Educational Measurement, 2006
Under item response theory (IRT), linking proficiency scales from separate calibrations of multiple forms of a test to achieve a common scale is required in many applications. Four IRT linking methods including the mean/mean, mean/sigma, Haebara, and Stocking-Lord methods have been presented for use with single-format tests. This study extends the…
Descriptors: Simulation, Item Response Theory, Test Format, Measures (Individuals)
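Of the four methods named, mean/sigma is the simplest to illustrate: choose the linear transformation that matches the mean and standard deviation of common-item difficulty estimates across the two calibrations. A sketch with made-up numbers:

```python
import numpy as np

# Hypothetical difficulty estimates for the same anchor items from
# two separate calibrations (scale X and scale Y).
b_x = np.array([-1.2, -0.4, 0.1, 0.8, 1.5])
b_y = np.array([-0.9, -0.1, 0.4, 1.2, 1.9])

# Mean/sigma linking: A, B chosen so that A*b_x + B matches the mean
# and standard deviation of b_y.
A = b_y.std(ddof=1) / b_x.std(ddof=1)
B = b_y.mean() - A * b_x.mean()

theta_x = 0.5                    # an ability on scale X
print(A * theta_x + B)           # the same ability expressed on scale Y
# Discriminations rescale in the opposite direction: a_y = a_x / A.
```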

