Publication Date
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 4 |
| Since 2007 (last 20 years) | 9 |
Descriptor
| Multiple Choice Tests | 120 |
| Scoring Formulas | 120 |
| Guessing (Tests) | 69 |
| Test Reliability | 47 |
| Response Style (Tests) | 33 |
| Test Validity | 32 |
| Test Items | 26 |
| Higher Education | 25 |
| Scoring | 23 |
| Confidence Testing | 22 |
| Test Interpretation | 20 |
Author
| Frary, Robert B. | 10 |
| Cross, Lawrence H. | 4 |
| Echternacht, Gary | 4 |
| Plake, Barbara S. | 4 |
| Boldt, Robert F. | 3 |
| Jacobs, Stanley S. | 3 |
| Weiss, David J. | 3 |
| Wilcox, Rand R. | 3 |
| Bliss, Leonard B. | 2 |
| Donlon, Thomas F. | 2 |
| Drasgow, Fritz | 2 |
Education Level
| Higher Education | 4 |
| Postsecondary Education | 2 |
| High Schools | 1 |
Audience
| Researchers | 4 |
Assessments and Surveys
| Armed Services Vocational… | 2 |
| Iowa Tests of Basic Skills | 2 |
| Childrens Manifest Anxiety… | 1 |
| SAT (College Admission Test) | 1 |
| State Trait Anxiety Inventory | 1 |
Atkinson, George F.; Doadt, Edward – Assessment in Higher Education, 1980
Some perceived difficulties with conventional multiple-choice tests are mentioned, and a modified form of examination is proposed. It uses a computer program to award full marks for correct answers and partial marks for partially correct answers, and to check for widespread misunderstanding of an item or subject. (MSE)
Descriptors: Achievement Tests, Computer Assisted Testing, Higher Education, Multiple Choice Tests
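The partial-credit scheme this abstract describes can be sketched in a few lines of Python. Everything here (the credit table, the flagging threshold, the function names) is an illustrative assumption, not the authors' actual program:

```python
# Per-item credit for each option: 1.0 = correct, fractions = partially
# correct, 0.0 = wrong. The values below are hypothetical.
CREDIT = {
    "Q1": {"A": 1.0, "B": 0.5, "C": 0.0, "D": 0.0},
    "Q2": {"A": 0.0, "B": 1.0, "C": 0.25, "D": 0.0},
}

def score(responses):
    """Sum partial credit over one student's responses."""
    return sum(CREDIT[item][choice] for item, choice in responses.items())

def flag_misunderstood(all_responses, threshold=0.5):
    """Flag items where more than `threshold` of students earned no
    credit -- a crude signal of widespread misunderstanding."""
    flagged = []
    for item in CREDIT:
        zero = sum(1 for r in all_responses if CREDIT[item][r[item]] == 0.0)
        if zero / len(all_responses) > threshold:
            flagged.append(item)
    return flagged
```

A student answering B on both items would earn 0.5 + 1.0 = 1.5 points under this table.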
Frary, Robert B.; And Others – 1985
Students in an introductory college course (n=275) responded to equivalent 20-item halves of a test under number-right and formula-scoring instructions. Formula scores of those who omitted items averaged about one point lower than their comparable (formula adjusted) scores on the test half administered under number-right instructions. In contrast,…
Descriptors: Guessing (Tests), Higher Education, Multiple Choice Tests, Questionnaires
Peer reviewed: Baskin, David – Journal of Educational Measurement, 1975
Traditional test scoring does not allow the examination of differences among subjects obtaining identical raw scores on the same test. A configuration scoring paradigm for identical raw scores, which provides for such comparisons, is developed and illustrated. (Author)
Descriptors: Elementary Secondary Education, Individual Differences, Mathematical Models, Multiple Choice Tests
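The core observation behind configuration scoring can be shown in a tiny example. This is illustrative only and does not reproduce the paper's actual paradigm: two examinees can share a raw score while answering entirely different items correctly.

```python
# 1 = correct, 0 = incorrect, one entry per item.
student_a = [1, 1, 1, 0, 0, 0]  # right on the first three items
student_b = [0, 0, 0, 1, 1, 1]  # right on the last three items

# Identical raw scores...
assert sum(student_a) == sum(student_b) == 3
# ...but different response configurations, which configuration
# scoring is designed to distinguish.
assert student_a != student_b
```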
Boldt, Robert F. – 1974
One formulation of confidence scoring requires the examinee to indicate as a number his personal probability of the correctness of each alternative in a multiple-choice test. For this formulation, a linear transformation of the logarithm of the probability assigned to the correct response is maximized if the examinee accurately reports his personal probability. To equate…
Descriptors: Confidence Testing, Guessing (Tests), Multiple Choice Tests, Probability
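The logarithmic confidence-scoring rule described above can be demonstrated numerically. The sketch below is a generic log scoring rule, not Boldt's exact formulation; it shows the key property that honest reporting of one's personal probabilities maximizes the expected score (Gibbs' inequality):

```python
import math

def log_score(reported, correct_index):
    """Log of the probability the examinee assigned to the option
    that turned out to be correct."""
    return math.log(reported[correct_index])

def expected_score(belief, reported):
    """Expected log score when the examinee's true personal
    probabilities are `belief` but they report `reported`."""
    return sum(p * math.log(q) for p, q in zip(belief, reported))

belief = [0.7, 0.2, 0.1]
honest = expected_score(belief, belief)          # report the truth
hedged = expected_score(belief, [0.5, 0.3, 0.2])  # hedge toward uniform
assert honest > hedged  # truthful reporting has higher expected score
```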
Jacobs, Stanley S. – 1974
Investigated were the effects of two levels of penalty for incorrect responses on two dependent variables (a measure of risk-taking or confidence, based on nonsense items, and the number of response-attempts to legitimate items) for three treatment groups in a 2x3, multi-response repeated measures, multivariate ANOVA (Analysis of Variance) design.…
Descriptors: Confidence Testing, Criterion Referenced Tests, Guessing (Tests), Multiple Choice Tests
Peer reviewed: Kane, Michael; Moloney, James – Applied Psychological Measurement, 1978
The answer-until-correct (AUC) procedure requires that examinees respond to a multiple-choice item until they answer it correctly. Using a modified version of Horst's model for examinee behavior, this paper compares the effect of guessing on item reliability for the AUC procedure and the zero-one scoring procedure. (Author/CTM)
Descriptors: Guessing (Tests), Item Analysis, Mathematical Models, Multiple Choice Tests
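The two procedures compared above can be contrasted in code. The linearly decreasing AUC credit below is one common weighting scheme, assumed for illustration; the paper's exact weights may differ:

```python
def auc_score(attempts, n_options=4):
    """Answer-until-correct credit: full credit if the first attempt
    is right, linearly less for each further attempt, zero when every
    option was tried."""
    return max(n_options - attempts, 0) / (n_options - 1)

def zero_one_score(attempts):
    """Conventional zero-one scoring: credit only when the first
    response is correct."""
    return 1 if attempts == 1 else 0
```

On a four-option item, a second-attempt success earns 2/3 of a point under AUC but nothing under zero-one scoring.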
Peer reviewed: Eakin, Richard R.; Long, Clifford A. – Educational and Psychological Measurement, 1977
A scoring technique for true-false tests is presented. The technique, paired item scoring, involves combining two statements and having the student select one of the four resultants possible: true-true, false-true, true-false, and false-false. The combined item is treated as a multiple choice item. (Author/JKS)
Descriptors: Guessing (Tests), Measurement Techniques, Multiple Choice Tests, Objective Tests
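The paired-item idea is simple to state in code. The keying convention (grading a pair as one four-option item) is the paper's proposal; the data structures here are illustrative assumptions:

```python
def score_paired(key_pair, answer_pair):
    """Full credit only when BOTH true/false judgments match the key,
    e.g. key ('T', 'F') vs. answer ('T', 'F')."""
    return int(tuple(answer_pair) == tuple(key_pair))

# Blind guessing succeeds on 1/2 of single true-false items but only
# 1/4 of paired items -- the same chance level as a four-option
# multiple-choice question.
```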
Peer reviewed: Gross, Leon J. – Evaluation and the Health Professions, 1982
Despite the 50 percent probability of a correctly guessed response, a multiple true-false examination should provide sufficient score variability for adequate discrimination without formula scoring. This scoring system directs examinees to respond to each item, with their scores based simply on the number of correct responses. (Author/CM)
Descriptors: Achievement Tests, Guessing (Tests), Health Education, Higher Education
Tollefson, Nona; Chung, Jing-Mei – 1986
Procedures for correcting for guessing and for assessing partial knowledge (correction-for-guessing, three-decision scoring, elimination/inclusion scoring, and confidence or probabilistic scoring) are discussed. Mean scores and internal consistency reliability estimates were compared across three administration and scoring procedures for…
Descriptors: Achievement Tests, Comparative Analysis, Evaluation Methods, Graduate Students
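The correction-for-guessing procedure named above is conventionally the formula score FS = R − W/(k−1), where R is the number right, W the number wrong, and k the number of options per item. A minimal sketch (the specific variants Tollefson and Chung compare are not reproduced here):

```python
def formula_score(right, wrong, options=4):
    """Classical correction-for-guessing. Omitted items cost nothing;
    each wrong answer subtracts 1/(k-1) of a point, so blind random
    guessing has an expected payoff of exactly zero:
    (1/k)(+1) + ((k-1)/k)(-1/(k-1)) = 0."""
    return right - wrong / (options - 1)
```

For example, 30 right and 8 wrong on five-option items gives 30 − 8/4 = 28.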
Hutchinson, T. P. – 1984
One means of learning about the processes operating in a multiple choice test is to include some test items, called nonsense items, which have no correct answer. This paper compares two versions of a mathematical model of test performance to interpret test data that includes both genuine and nonsense items. One formula is based on the usual…
Descriptors: Foreign Countries, Guessing (Tests), Mathematical Models, Multiple Choice Tests
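The nonsense-item idea lends itself to a simple estimator: since a nonsense item has no correct answer, any response to one must be a guess. The helper below is a hypothetical illustration of that logic, not Hutchinson's model:

```python
def guessing_rate(nonsense_responses):
    """Estimate willingness to guess from responses to nonsense items
    (None = omitted). Every answered nonsense item is necessarily a
    guess, so the answer rate estimates guessing propensity."""
    answered = sum(1 for r in nonsense_responses if r is not None)
    return answered / len(nonsense_responses)
```

An examinee who answers two of four nonsense items gets an estimated guessing rate of 0.5.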
Bliss, Leonard B. – 1981
The aim of this study was to show that the superiority of corrected-for-guessing scores over number right scores as true score estimates depends on the ability of examinees to recognize situations where they can eliminate one or more alternatives as incorrect and to omit items where they would only be guessing randomly. Previous investigations…
Descriptors: Algorithms, Guessing (Tests), Intermediate Grades, Multiple Choice Tests
Peer reviewed: Echternacht, Gary – Educational and Psychological Measurement, 1976
Compares various item option scoring methods with respect to coefficient alpha and a concurrent validity coefficient. Scoring methods compared were: formula scoring, a priori scoring, empirical scoring with an internal criterion, and two modifications of formula scoring. The empirically determined scoring system is seen as superior. (RC)
Descriptors: Aptitude Tests, Multiple Choice Tests, Response Style (Tests), Scoring Formulas
Peer reviewed: Bradbard, David A.; Green, Samuel B. – Journal of Experimental Education, 1986
The effectiveness of the Coombs elimination procedure was evaluated with 29 college students enrolled in a statistics course. Five multiple-choice tests were employed and scored using the Coombs procedure. Results suggest that the Coombs procedure decreased guessing, and this effect increased over the grading period. (Author/LMO)
Descriptors: Analysis of Variance, College Students, Guessing (Tests), Higher Education
Peer reviewed: Traub, Ross E.; Hambleton, Ronald K. – Educational and Psychological Measurement, 1973
Descriptors: Grade 8, Guessing (Tests), Multiple Choice Tests, Pacing
Peer reviewed: Jacobs, Stanley S. – Journal of Educational Measurement, 1971
Descriptors: Guessing (Tests), Individual Differences, Measurement Techniques, Multiple Choice Tests


