Publication Date
| Date Range | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 10 |
| Since 2022 (last 5 years) | 40 |
| Since 2017 (last 10 years) | 118 |
| Since 2007 (last 20 years) | 211 |
Descriptor
| Descriptor | Results |
| --- | --- |
| Multiple Choice Tests | 532 |
| Test Reliability | 532 |
| Test Validity | 302 |
| Test Construction | 238 |
| Test Items | 172 |
| Foreign Countries | 114 |
| Item Analysis | 101 |
| Higher Education | 90 |
| Difficulty Level | 85 |
| Guessing (Tests) | 74 |
| Scoring | 69 |
Author
| Author | Results |
| --- | --- |
| Ebel, Robert L. | 10 |
| Frary, Robert B. | 9 |
| Alonzo, Julie | 7 |
| Frisbie, David A. | 6 |
| Irvin, P. Shawn | 6 |
| Lai, Cheng-Fei | 6 |
| Park, Bitnara Jasmine | 6 |
| Tindal, Gerald | 6 |
| Wilcox, Rand R. | 5 |
| Albanese, Mark A. | 4 |
| Biancarosa, Gina | 4 |
Audience
| Audience | Results |
| --- | --- |
| Researchers | 11 |
| Practitioners | 8 |
| Teachers | 5 |
Location
| Location | Results |
| --- | --- |
| Indonesia | 17 |
| Turkey | 17 |
| Germany | 8 |
| Iran | 8 |
| Canada | 6 |
| Malaysia | 4 |
| Nigeria | 4 |
| Australia | 3 |
| Florida | 3 |
| Japan | 3 |
| Pakistan | 3 |
Laws, Policies, & Programs
| Law/Policy/Program | Results |
| --- | --- |
| No Child Left Behind Act 2001 | 1 |
Peer reviewed: Serlin, Ronald C.; Kaiser, Henry F. – Educational and Psychological Measurement, 1978
When multiple-choice tests are scored in the usual manner, giving each correct answer one point, information concerning response patterns is lost. A method for utilizing this information is suggested. An example is presented and compared with two conventional methods of scoring. (Author/JKS)
Descriptors: Correlation, Factor Analysis, Item Analysis, Multiple Choice Tests
Peer reviewed: Colgan, L. H. – International Journal of Mathematical Education in Science and Technology, 1977
Multiple-choice mathematics tests and other measures of achievement in a mathematics course for engineering students were compared. Although the results were highly correlated, there were some discrepancies. (SD)
Descriptors: Achievement Tests, Engineering Education, Higher Education, Mathematics Education
Peer reviewed: Frisbie, David A.; Druva, Cynthia A. – Journal of Educational Measurement, 1986
This study was designed to examine the level of dependence within multiple true-false test-item clusters by computing sets of item correlations with data from a test composed of both multiple true-false and multiple-choice items. (Author/LMO)
Descriptors: Cluster Analysis, Correlation, Higher Education, Multiple Choice Tests
Hester, D. A. – Didaskalos, 1976
The question whether the range of assessment for a course in Greek literature in translation can be broadened to include multiple choice tests is explored. An experimental test is described and evaluated. (RM)
Descriptors: Greek Literature, Language Instruction, Multiple Choice Tests, Objective Tests
Peer reviewed: Krauft, Conrad C.; Beggs, Donald L. – Journal of Experimental Education, 1973
The purpose of the study was to determine whether a subject-weighted (SW) multiple-choice test-taking procedure would result in higher and more reliable scores than the conventional (C) multiple-choice test-taking procedure, in general and at different levels of risk taking. (Author)
Descriptors: Attitudes, Educational Research, Multiple Choice Tests, Questionnaires
Peer reviewed: Cureton, Edward E. – Educational and Psychological Measurement, 1971
A rebuttal of Frary's 1969 article in Educational and Psychological Measurement. (MS)
Descriptors: Error of Measurement, Guessing (Tests), Multiple Choice Tests, Scoring Formulas
Peer reviewed: Willson, Victor L. – Educational and Psychological Measurement, 1982
The Serlin-Kaiser procedure is used to complete a principal components solution for scoring weights for all options of a given item. Coefficient alpha is maximized for a given multiple choice test. (Author/GK)
Descriptors: Analysis of Covariance, Factor Analysis, Multiple Choice Tests, Scoring Formulas
Bush, Martin E. – Quality Assurance in Education: An International Perspective, 2006
Purpose: To provide educationalists with an understanding of the key quality issues relating to multiple-choice tests, and a set of guidelines for the quality assurance of such tests. Design/methodology/approach: The discussion of quality issues is structured to reflect the order in which those issues naturally arise. It covers the design of…
Descriptors: Multiple Choice Tests, Test Reliability, Educational Quality, Quality Control
Peer reviewed: Oosterhof, Albert C.; Glasnapp, Douglas R. – Journal of Experimental Education, 1974
The present study was initiated to investigate the comparability of multiple-choice and true-false item formats when the time necessary to respond to each type of item was equated empirically. (Editor)
Descriptors: Data Analysis, Guessing (Tests), Multiple Choice Tests, Objective Tests
Frary, Robert B. – 1980
Ordinal response modes for multiple choice tests are those under which the examinee marks one or more choices in an effort to identify the correct choice, or to include it in a proper subset of the choices. Two ordinal response modes, answer-until-correct and Coombs' elimination of choices that examinees identify as wrong, were analyzed for scoring…
Descriptors: Guessing (Tests), Multiple Choice Tests, Responses, Scoring
Garvin, Alfred D.; Ralston, Nancy C. – 1970
Confidence Weighting (CW), after Ebel, and Multiple Responding (MR), after Coombs, are compared empirically to determine which improved test reliability more in the case of a course pretest derived from the final examination. It was hypothesized that MR, which purportedly measures partial knowledge, would be more effective than CW, which measures…
Descriptors: College Students, Educational Testing, Multiple Choice Tests, Objective Tests
Kane, Michael T.; Moloney, James M. – 1974
Gilman and Ferry have shown that when the student's score on a multiple choice test is the total number of responses necessary to get all items correct, substantial increases in reliability can occur. In contrast, similar procedures giving partial credit on multiple choice items have resulted in relatively small gains in reliability. The analysis…
Descriptors: Feedback, Guessing (Tests), Multiple Choice Tests, Response Style (Tests)
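The Gilman-and-Ferry procedure summarized above scores an examinee by counting every response made until each item is answered correctly, so a lower total indicates more knowledge. A minimal illustrative sketch of that counting rule (the function, data, and option labels are hypothetical, not taken from the paper):

```python
def answer_until_correct_score(response_sequences, key):
    """Score under answer-until-correct responding.

    For each item, count the responses the examinee marked until the
    keyed option was chosen, then sum across items (lower is better).

    response_sequences: per item, the ordered options the examinee marked.
    key: the correct option for each item.
    Assumes the examinee eventually marks the correct option on every item.
    """
    total = 0
    for guesses, correct in zip(response_sequences, key):
        # index() gives the position of the correct option; +1 converts
        # a zero-based position to a count of responses made.
        total += guesses.index(correct) + 1
    return total

# Item 1 answered on the first try, item 2 on the third:
print(answer_until_correct_score([["B"], ["A", "C", "D"]], ["B", "D"]))  # 4
```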
Schmeiser, Cynthia Board; Whitney, Douglas R. – 1973
Violations of four selected principles of writing multiple-choice items were introduced into an undergraduate religion course mid-term examination. Three of the flaws significantly increased test difficulty. KR-20 values were lower for all of the tests containing the flawed items than for the "good" versions of the items, but significantly so…
Descriptors: Item Analysis, Multiple Choice Tests, Research Reports, Test Construction
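The KR-20 coefficient reported in studies like the one above is the standard internal-consistency reliability index for dichotomously scored tests. A small sketch of the textbook formula, KR-20 = (k/(k-1)) * (1 - Σ p_j q_j / σ²_X); this is illustrative only, not code from any listed study:

```python
def kr20(responses):
    """Kuder-Richardson formula 20 for a matrix of 0/1 item scores.

    responses: list of examinee rows, each a list of 0/1 item scores.
    """
    n = len(responses)           # number of examinees
    k = len(responses[0])        # number of items
    # Sum of item variances p_j * q_j, where p_j is the proportion correct.
    pq_sum = 0.0
    for j in range(k):
        p = sum(row[j] for row in responses) / n
        pq_sum += p * (1 - p)
    # Population variance of the total (number-right) scores.
    totals = [sum(row) for row in responses]
    mean = sum(totals) / n
    var_total = sum((t - mean) ** 2 for t in totals) / n
    if var_total == 0:
        raise ValueError("total-score variance is zero; KR-20 is undefined")
    return (k / (k - 1)) * (1 - pq_sum / var_total)

# Four examinees, three items (1 = correct):
data = [
    [1, 1, 1],
    [1, 1, 0],
    [1, 0, 0],
    [0, 0, 0],
]
print(kr20(data))  # 0.75
```

Flawed items tend to lower the p_j q_j terms' contribution to true-score variance relative to error, which is consistent with the lower KR-20 values the study reports.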
Peer reviewed: Frary, Robert B.; And Others – Journal of Experimental Education, 1977
To date, a theoretical basis has not been developed for determining changes in reliability when score points from random guessing are eliminated and those from non-random guessing are retained. This paper presents a derivation of an expression for the reliability coefficient that displays the effect of deleting score components due to random…
Descriptors: Data Analysis, Guessing (Tests), Multiple Choice Tests, Scoring Formulas
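Several entries above (Cureton 1971, Frary 1977) concern formula scoring, in which a penalty for wrong answers removes the expected score gain from blind guessing. The classic correction is S = R - W/(k-1), sketched here under the assumption of k equally attractive options per item (illustrative, not drawn from any one paper listed):

```python
def formula_score(num_right: int, num_wrong: int, num_options: int) -> float:
    """Correction-for-guessing score: right minus wrong / (k - 1).

    Omitted items are neither rewarded nor penalized, so a purely
    random guesser expects a score of zero on the items guessed.
    """
    if num_options < 2:
        raise ValueError("items need at least two options")
    return num_right - num_wrong / (num_options - 1)

# On 4-option items, random guessing is right about 1 time in 4:
# 10 right and 30 wrong out of 40 guesses -> 10 - 30/3 = 0.0
print(formula_score(10, 30, 4))  # 0.0
```

The Frary derivation above concerns exactly the reliability consequences of removing the random-guessing component that this penalty targets.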
Peer reviewed: Lord, Frederic M. – Journal of Educational Measurement, 1977
Two approaches for determining the optimal number of choices for a test item, presently in the literature, are compared with two new approaches. (Author)
Descriptors: Forced Choice Technique, Latent Trait Theory, Multiple Choice Tests, Test Items
