Publication Date

| Date range | Records |
|---|---|
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 0 |
| Since 2007 (last 20 years) | 1 |
Descriptor

| Descriptor | Records |
|---|---|
| Multiple Choice Tests | 7 |
| Scoring Formulas | 7 |
| Test Items | 5 |
| Guessing (Tests) | 3 |
| Objective Tests | 2 |
| Scores | 2 |
| Test Interpretation | 2 |
| Test Theory | 2 |
| Achievement Tests | 1 |
| Educational Practices | 1 |
| Error Correction | 1 |
Source

| Source | Records |
|---|---|
| Applied Measurement in… | 1 |
| Applied Psychological… | 1 |
| Assessment & Evaluation in… | 1 |
| Educational and Psychological… | 1 |
| Evaluation and the Health… | 1 |
| Teaching Mathematics and Its… | 1 |
Author

| Author | Records |
|---|---|
| Frary, Robert B. | 2 |
| Budescu, David V. | 1 |
| Burton, Richard F. | 1 |
| Drasgow, Fritz | 1 |
| Gross, Leon J. | 1 |
| Hutchinson, T.P. | 1 |
| Van Hecke, Tanja | 1 |
Publication Type

| Publication Type | Records |
|---|---|
| Reports - Evaluative | 7 |
| Journal Articles | 6 |
| Reports - Research | 1 |
Assessments and Surveys

| Assessment / Survey | Records |
|---|---|
| Armed Services Vocational… | 1 |
Van Hecke, Tanja – Teaching Mathematics and Its Applications, 2015
Optimal assessment tools should measure students' knowledge accurately and without bias within a limited time. Multiple-choice scoring is one way to automate scoring. This article compares scoring methods from a probabilistic point of view by modelling the probability to pass: number-right scoring, the initial correction (IC) and…
Descriptors: Multiple Choice Tests, Error Correction, Grading, Evaluation Methods
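The Van Hecke entry above compares number-right scoring with a correction for guessing. A minimal sketch of those two standard rules (not the article's probabilistic pass model; the item counts and function names below are illustrative):

```python
def number_right_score(right: int) -> float:
    """Number-right scoring: one point per correct answer, no penalty for wrong guesses."""
    return float(right)

def formula_score(right: int, wrong: int, options_per_item: int) -> float:
    """Classic correction for guessing: R - W/(k - 1).
    Omitted items neither add nor subtract points."""
    return right - wrong / (options_per_item - 1)

# Hypothetical examinee on a 40-item, 4-option test: 30 right, 6 wrong, 4 omitted.
print(number_right_score(30))     # 30.0
print(formula_score(30, 6, 4))    # 28.0
```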
Frary, Robert B. – Applied Measurement in Education, 1989 (peer reviewed)
Multiple-choice response and scoring methods that attempt to determine an examinee's degree of knowledge about each item in order to produce a total test score are reviewed. There is apparently little advantage to such schemes; however, they may have secondary benefits such as providing feedback to enhance learning. (SLD)
Descriptors: Knowledge Level, Multiple Choice Tests, Scoring, Scoring Formulas
Budescu, David V. – 1979
This paper outlines a technique for differentially weighting the options of a multiple choice test in a fashion that maximizes the item's predictive validity. The rule can be applied with different numbers of categories, and the "optimal" number of categories can be determined by significance tests and/or through the R2 criterion. Our theoretical analysis…
Descriptors: Multiple Choice Tests, Predictive Validity, Scoring Formulas, Test Items
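One rough way to illustrate regression-based option weighting of the kind the Budescu entry describes (this is not the paper's exact rule; the data, option count, and variable names below are invented):

```python
import numpy as np

# Invented data for a single 4-option item: the option each of 10 examinees chose
# (coded 0-3) and an external criterion score the option weights should predict.
choices = np.array([0, 1, 1, 2, 3, 0, 1, 2, 2, 3])
criterion = np.array([55.0, 78.0, 80.0, 62.0, 40.0, 58.0, 75.0, 65.0, 60.0, 44.0])

# Dummy-code the chosen option: one indicator column per response option.
X = np.eye(4)[choices]

# Least-squares option weights; the R^2 of this fit gives one criterion for
# comparing weighting schemes that use different numbers of score categories.
weights, *_ = np.linalg.lstsq(X, criterion, rcond=None)
fitted = X @ weights
r2 = 1.0 - ((criterion - fitted) ** 2).sum() / ((criterion - criterion.mean()) ** 2).sum()
print(weights.round(2), round(r2, 3))
```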
Drasgow, Fritz; And Others – Applied Psychological Measurement, 1989 (peer reviewed)
Multilinear formula scoring (MFS) is reviewed, with emphasis on estimating option characteristic curves (OCCs). MFS was used to estimate OCCs for the arithmetic reasoning subtest of the Armed Services Vocational Aptitude Battery for 2,978 examinees. A second analysis obtained OCCs for simulated data. The use of MFS is discussed. (SLD)
Descriptors: Estimation (Mathematics), Mathematical Models, Multiple Choice Tests, Scores
Gross, Leon J. – Evaluation and the Health Professions, 1982 (peer reviewed)
Despite the 50 percent probability of a correctly guessed response, a multiple true-false examination should provide sufficient score variability for adequate discrimination without formula scoring. This scoring system directs examinees to respond to each item, with their scores based simply on the number of correct responses. (Author/CM)
Descriptors: Achievement Tests, Guessing (Tests), Health Education, Higher Education
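To put the 50 percent guessing figure in the Gross entry in context, here is the standard chance-score arithmetic for number-right scoring of true-false items (a textbook binomial result, not a calculation from the article):

```latex
% Blind guessing on n independent true-false items, scored number-right:
X \sim \mathrm{Binomial}\!\left(n, \tfrac{1}{2}\right), \qquad
E[X] = \frac{n}{2}, \qquad
\mathrm{SD}(X) = \frac{\sqrt{n}}{2}.
% For n = 100 items, chance performance is roughly 50 +/- 5 points,
% so examinee scores can still spread well above that floor.
```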
Willingness to Answer Multiple-Choice Questions as Manifested Both in Genuine and in Nonsense Items.
Frary, Robert B.; Hutchinson, T.P. – Educational and Psychological Measurement, 1982 (peer reviewed)
Alternate versions of Hutchinson's theory were compared, and one which implies the existence of partial knowledge was found to be better than one which implies that an appropriate measure of ability is obtained by applying the conventional correction for guessing. (Author/PN)
Descriptors: Guessing (Tests), Latent Trait Theory, Multiple Choice Tests, Scoring Formulas
Multiple Choice and True/False Tests: Reliability Measures and Some Implications of Negative Marking
Burton, Richard F. – Assessment & Evaluation in Higher Education, 2004
The standard error of measurement usefully provides confidence limits for scores in a given test, but is it possible to quantify the reliability of a test with just a single number that allows comparison of tests of different formats? Reliability coefficients do not do this, being dependent on the spread of examinee attainment. Better in this…
Descriptors: Multiple Choice Tests, Error of Measurement, Test Reliability, Test Items
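The Burton entry turns on the relation between the standard error of measurement and the reliability coefficient. A minimal sketch of the textbook relation SEM = SD × sqrt(1 − reliability) and the confidence limits it yields (the numbers are hypothetical, and this is not Burton's proposed single-number index):

```python
import math

def standard_error_of_measurement(sd: float, reliability: float) -> float:
    """Classical SEM: score standard deviation times sqrt(1 - reliability)."""
    return sd * math.sqrt(1.0 - reliability)

def score_confidence_limits(score: float, sd: float, reliability: float, z: float = 1.96):
    """Approximate 95% confidence limits for an observed test score."""
    margin = z * standard_error_of_measurement(sd, reliability)
    return score - margin, score + margin

# Hypothetical test: score SD = 10, reliability coefficient = 0.84.
print(round(standard_error_of_measurement(10, 0.84), 2))   # 4.0
low, high = score_confidence_limits(72, 10, 0.84)
print(round(low, 2), round(high, 2))                        # 64.16 79.84
```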

