Lahner, Felicitas-Maria; Lörwald, Andrea Carolin; Bauer, Daniel; Nouns, Zineb Miriam; Krebs, René; Guttormsen, Sissel; Fischer, Martin R.; Huwendiek, Sören – Advances in Health Sciences Education, 2018
Multiple true-false (MTF) items are a widely used supplement to the commonly used single-best answer (Type A) multiple choice format. However, an optimal scoring algorithm for MTF items has not yet been established, as existing studies yielded conflicting results. Therefore, this study analyzes two questions: What is the optimal scoring algorithm…
Descriptors: Scoring Formulas, Scoring Rubrics, Objective Tests, Multiple Choice Tests
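The competing MTF scoring rules at issue in studies like this can be made concrete with a small sketch. The three rules below (dichotomous, partial credit, and a half-point threshold) are common candidates in the multiple true-false literature; they are illustrative assumptions, not necessarily the exact algorithms Lahner et al. compared.

```python
# Hypothetical sketch of three common scoring rules for a multiple
# true-false (MTF) item, in which the examinee judges each statement
# as true or false.

def score_dichotomous(responses, key):
    """1 point only if every true/false judgment matches the key."""
    return 1.0 if responses == key else 0.0

def score_partial(responses, key):
    """Partial credit: fraction of statements judged correctly."""
    hits = sum(r == k for r, k in zip(responses, key))
    return hits / len(key)

def score_half_point(responses, key):
    """Threshold rule: full credit for all correct, half credit if
    exactly one judgment is wrong, zero otherwise."""
    hits = sum(r == k for r, k in zip(responses, key))
    if hits == len(key):
        return 1.0
    if hits == len(key) - 1:
        return 0.5
    return 0.0

key = (True, False, False, True)       # hypothetical 4-statement item
responses = (True, False, True, True)  # examinee misjudges one statement
```

The choice among such rules is exactly what drives the reliability differences the abstract alludes to: partial-credit rules award more score points per item than the dichotomous rule.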
Gierl, Mark J.; Bulut, Okan; Guo, Qi; Zhang, Xinxin – Review of Educational Research, 2017
Multiple-choice testing is considered one of the most effective and enduring forms of educational assessment that remains in practice today. This study presents a comprehensive review of the literature on multiple-choice testing in education focused, specifically, on the development, analysis, and use of the incorrect options, which are also…
Descriptors: Multiple Choice Tests, Difficulty Level, Accuracy, Error Patterns
Peer reviewed: Serlin, Ronald C.; Kaiser, Henry F. – Educational and Psychological Measurement, 1978
When multiple-choice tests are scored in the usual manner, giving each correct answer one point, information concerning response patterns is lost. A method for utilizing this information is suggested. An example is presented and compared with two conventional methods of scoring. (Author/JKS)
Descriptors: Correlation, Factor Analysis, Item Analysis, Multiple Choice Tests
Peer reviewed: Kane, Michael; Moloney, James – Applied Psychological Measurement, 1978
The answer-until-correct (AUC) procedure requires that examinees respond to a multiple-choice item until they answer it correctly. Using a modified version of Horst's model for examinee behavior, this paper compares the effect of guessing on item reliability for the AUC procedure and the zero-one scoring procedure. (Author/CTM)
Descriptors: Guessing (Tests), Item Analysis, Mathematical Models, Multiple Choice Tests
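The AUC idea can be illustrated with a small sketch. The linear credit rule below, full credit for a first-try success and proportionally less for each additional attempt, is one common choice in the answer-until-correct literature; it is an assumption for illustration, not necessarily the scoring model Kane and Moloney analyze.

```python
def auc_score(attempts, n_options):
    """Score an answer-until-correct item with a linear credit rule:
    1.0 for a first-attempt success, 0.0 when every option had to be
    tried, and evenly spaced credit in between. This is one common
    AUC scoring rule, used here only as an illustration."""
    if not 1 <= attempts <= n_options:
        raise ValueError("attempts must be between 1 and n_options")
    return (n_options - attempts) / (n_options - 1)
```

Because partial credit accrues even after an initial error, each item yields more score levels than zero-one scoring, which is the mechanism behind the reliability gains the procedure is meant to deliver.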
Lowry, Stephen R. – 1979
A specially designed answer format was used for three tests in a college-level agriculture class of 19 students to record three pieces of information for each item: (1) the student's choice of the best answer; (2) the degree of certainty with which the answer was chosen; and (3) all the answer choices which the student was certain were incorrect.…
Descriptors: Achievement Tests, Confidence Testing, Guessing (Tests), Higher Education
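The third component of that answer format, marking options the examinee is certain are wrong, is usually scored with an elimination rule. The Coombs-style rule sketched below (credit for each distractor correctly eliminated, a large penalty for eliminating the keyed answer) is a standard scheme from the elimination-testing literature, not necessarily the one used in Lowry's study.

```python
def elimination_score(eliminated, key, n_options):
    """Coombs-style elimination scoring (illustrative assumption):
    +1 for each incorrect option the examinee eliminates, and
    -(n_options - 1) if the keyed answer is eliminated, so that
    blind elimination of everything nets zero."""
    score = 0
    for opt in eliminated:
        if opt == key:
            score -= n_options - 1
        else:
            score += 1
    return score
```

The penalty is sized so that eliminating all options of an item scores zero, which removes the incentive to eliminate indiscriminately.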
Kane, Michael T.; Moloney, James M. – 1976
The Answer-Until-Correct (AUC) procedure has been proposed in order to increase the reliability of multiple-choice items. A model for examinees' behavior when they must respond to each item until they answer it correctly is presented. An expression for the reliability of AUC items, as a function of the characteristics of the item and the scoring…
Descriptors: Guessing (Tests), Item Analysis, Mathematical Models, Multiple Choice Tests
Donlon, Thomas F.; Fitzpatrick, Anne R. – 1978
On the basis of past research efforts to improve multiple-choice test information through differential weighting of responses to wrong answers (distractors), two statistical indices are developed. Each describes the properties of response distributions across the options of an item. Jaspen's polyserial generalization of the biserial correlation…
Descriptors: Confidence Testing, Difficulty Level, Guessing (Tests), High Schools
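Jaspen's polyserial index is involved to compute, but the family of correlational indices it belongs to can be illustrated with its simplest member: the point-biserial correlation between endorsing a given option and a criterion score, a standard building block of distractor analysis. The function and data below are hypothetical illustrations, not the indices Donlon and Fitzpatrick develop.

```python
import math

def point_biserial(chose_option, criterion):
    """Point-biserial correlation between a 0/1 endorsement flag and a
    continuous criterion score: r = (M1 - M) / sd * sqrt(p / q),
    where M1 is the criterion mean of endorsers, M and sd the overall
    mean and (population) standard deviation, and p the endorsement
    proportion. Assumes at least one endorser and one non-endorser."""
    n = len(criterion)
    p = sum(chose_option) / n
    q = 1 - p
    mean_all = sum(criterion) / n
    mean_1 = (sum(c for f, c in zip(chose_option, criterion) if f)
              / sum(chose_option))
    sd = math.sqrt(sum((c - mean_all) ** 2 for c in criterion) / n)
    return (mean_1 - mean_all) / sd * math.sqrt(p / q)
```

A strongly negative value for a distractor indicates it attracts low-scoring examinees, which is exactly the property differential option weighting tries to exploit.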
Sabers, Darrell L.; White, Gordon W. – 1971
A procedure for scoring multiple-choice tests by assigning different weights to every option of a test item is investigated. The weighting method used was based on that proposed by Davis, which involves taking the upper and lower 27% of a sample, according to some criterion measure, and using the percentages of these groups marking an item option…
Descriptors: Computer Oriented Programs, Item Analysis, Measurement Techniques, Multiple Choice Tests
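The upper-and-lower-27% step described in this abstract can be sketched as follows. Davis's actual weights involve a further normalized index; the simple difference in endorsement proportions between the two criterion groups used below is an illustrative stand-in, not the exact weighting Sabers and White investigate.

```python
# Hypothetical sketch of a Davis-style option-weighting step: split
# examinees into the upper and lower 27% on a criterion measure, then
# weight each option by how differently the two groups endorse it.

def option_weights(scores, choices, n_options, tail=0.27):
    """scores: criterion score per examinee; choices: index of the
    option each examinee marked. Returns one weight per option."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    k = max(1, int(round(tail * len(scores))))
    lower, upper = order[:k], order[-k:]

    def endorsement(group, opt):
        # Proportion of the group that marked this option.
        return sum(choices[i] == opt for i in group) / len(group)

    return [endorsement(upper, o) - endorsement(lower, o)
            for o in range(n_options)]
```

Options favored by the high-criterion group get positive weights and options favored by the low-criterion group get negative ones, so every response, not just the keyed one, contributes to the score.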
Peer reviewed: Atkins, Warren J.; And Others – Educational Studies in Mathematics, 1991
Results from the Australian Mathematics Competition for 1988 and 1989 (n=309,443 and n=331,660) were analyzed on three statistical measures to determine risk-taking tendencies by groups of students classified by gender, school year, and achievement level. Results showed statistically significant differences for gender but varied depending on the…
Descriptors: Distractors (Tests), Guessing (Tests), Item Analysis, Mathematics Achievement
Drasgow, Fritz; And Others – 1987
This paper addresses the information revealed in incorrect option selection on multiple choice items. Multilinear Formula Scoring (MFS), a theory providing methods for solving psychological measurement problems of long standing, is first used to estimate option characteristic curves for the Armed Services Vocational Aptitude Battery Arithmetic…
Descriptors: Aptitude Tests, Item Analysis, Latent Trait Theory, Mathematical Models
Wood, Robert – Evaluation in Education: International Progress, 1977
The author surveys literature and practice, primarily in Great Britain and the United States, about multiple-choice testing, comments on criticisms, and defends the state of the art. Various item types, item writing, test instructions and scoring formulas, item analysis, and test construction are discussed. An extensive bibliography is appended.…
Descriptors: Achievement Tests, Item Analysis, Multiple Choice Tests, Scoring Formulas
Lenel, Julia C.; Gilmer, Jerry S. – 1986
In some testing programs an early item analysis is performed before final scoring in order to validate the intended keys. As a result, some items which are flawed and do not discriminate well may be keyed so as to give credit to examinees no matter which answer was chosen. This is referred to as all-keying. This research examined how varying the…
Descriptors: Equated Scores, Item Analysis, Latent Trait Theory, Licensing Examinations (Professions)
Echternacht, Gary – 1973
Estimates for the variance of empirically determined scoring weights are given. It is shown that test item writers should write distractors that discriminate on the criterion variable when this type of scoring is used. (Author)
Descriptors: Item Analysis, Measurement Techniques, Multiple Choice Tests, Performance Criteria
Pascale, Pietro J. – 1971
This brief review explains some alternate scoring procedures to the classical method of summing correct responses. The novel procedures attempt to retrieve and use even the information contained in wrong responses. (Author)
Descriptors: Cognitive Processes, Computer Oriented Programs, Confidence Testing, Educational Diagnosis
Echternacht, Gary – 1973
This study compares various item option scoring methods with respect to coefficient alpha and a concurrent validity coefficient. The scoring methods under consideration were: (1) formula scoring, (2) a priori scoring, (3) empirical scoring with an internal criterion, and (4) two modifications of formula scoring. The study indicates a clear…
Descriptors: Item Analysis, Measurement Techniques, Multiple Choice Tests, Performance Criteria
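Of the methods this study compares, formula scoring has a standard textbook form: rights minus wrongs divided by one less than the number of options, with omits costing nothing. The sketch below implements that classic correction-for-guessing rule only; the a priori, empirical, and modified variants the abstract mentions are not reproduced here.

```python
def formula_score(right, wrong, n_options):
    """Classic correction-for-guessing formula score:
    S = R - W / (k - 1), where R is the number right, W the number
    wrong, and k the number of options per item. Omitted items are
    simply not counted in either R or W."""
    return right - wrong / (n_options - 1)
```

Under random guessing on k-option items, the expected penalty exactly cancels the expected chance successes, which is the rationale for the k minus 1 divisor.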
