| Publication Date | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 8 |
| Since 2022 (last 5 years) | 57 |
| Since 2017 (last 10 years) | 148 |
| Since 2007 (last 20 years) | 246 |
| Descriptor | Count |
| --- | --- |
| Multiple Choice Tests | 526 |
| Test Format | 526 |
| Test Items | 260 |
| Foreign Countries | 145 |
| Test Construction | 139 |
| Higher Education | 115 |
| Difficulty Level | 96 |
| Comparative Analysis | 93 |
| Scores | 86 |
| Test Reliability | 68 |
| Computer Assisted Testing | 64 |
| Audience | Count |
| --- | --- |
| Practitioners | 25 |
| Teachers | 21 |
| Researchers | 17 |
| Students | 7 |
| Administrators | 1 |
| Parents | 1 |
| Location | Count |
| --- | --- |
| Canada | 13 |
| Turkey | 12 |
| Netherlands | 9 |
| Germany | 8 |
| Australia | 6 |
| Japan | 6 |
| California | 5 |
| Iran | 5 |
| South Korea | 5 |
| United Kingdom | 5 |
| China | 4 |
Relationship between Free-Response and Choice-Type Tests of Achievement: A Review of the Literature.
Hogan, Thomas P. – 1981
Do choice-type tests (multiple-choice, true-false, etc.) measure the same abilities or traits as free response (essay, recall, completion, etc.) tests? A large number of studies conducted with several different methodologies and spanning a long period of time have addressed this question. In this review, attention will be focused almost…
Descriptors: Achievement Tests, Correlation, Essay Tests, Measurement Techniques
Capell, Frank J.; Quellmalz, Edys S. – 1980
In the area of large scale assessment, there is increasing interest in the measurement of students' written performance. At issue is whether the task demands in writing assessment can be simplified to involve the production of paragraph-length writing samples and/or multiple choice testing, rather than full-length essays. This study considers data…
Descriptors: Essay Tests, Factor Structure, High Schools, Multiple Choice Tests
Wilcox, Rand R.; Wilcox, Karen Thompson – Journal of Educational Measurement, 1988 (peer reviewed)
Use of latent class models to examine the strategies that examinees (92 college students) use for a specific task is illustrated via a multiple-choice test of spatial ability. Under an answer-until-correct scoring procedure, models representing an improvement over simplistic random guessing are proposed. (SLD)
Descriptors: College Students, Decision Making, Guessing (Tests), Multiple Choice Tests
Guthrie, John T. – Journal of Reading, 1984 (peer reviewed)
Compares multiple choice testing with free response formats in terms of the cognitive operations each type demands. (HOD)
Descriptors: Cognitive Processes, Cognitive Tests, Evaluation Criteria, Measurement Techniques
Haladyna, Thomas M.; Downing, Steven M. – Applied Measurement in Education, 1989 (peer reviewed)
A taxonomy of 43 rules for writing multiple-choice test items is presented, based on a consensus of 46 textbooks. These guidelines are presented as complete and authoritative, with solid consensus apparent for 33 of the rules. Four rules lack consensus, and 5 rules were cited fewer than 10 times. (SLD)
Descriptors: Classification, Interrater Reliability, Multiple Choice Tests, Objective Tests
Melancon, Janet G.; Thompson, Bruce – Psychology in the Schools, 1989 (peer reviewed)
Investigated measurement characteristics of both forms of the Finding Embedded Figures Test (FEFT). College students (N=302) completed both forms of the FEFT, or one form of the FEFT and the Group Embedded Figures Test. Results suggest that the FEFT forms provide reasonably reliable and valid data. (Author/NB)
Descriptors: College Students, Field Dependence Independence, Higher Education, Multiple Choice Tests
Knowles, Susan L.; Welch, Cynthia A. – Educational and Psychological Measurement, 1992 (peer reviewed)
A meta-analysis of the difficulty and discrimination of the "none-of-the-above" (NOTA) test option was conducted with 12 articles (20 effect sizes) for difficulty and 7 studies (11 effect sizes) for discrimination. Findings indicate that using the NOTA option does not result in items of lesser quality. (SLD)
Descriptors: Difficulty Level, Effect Size, Meta Analysis, Multiple Choice Tests
Narloch, Rodger; Garbin, Calvin P.; Turnage, Kimberly D. – Teaching of Psychology, 2006
We investigated the use of quizzes administered prior to lecture (i.e., prelecture quizzes) and compared them to no-quiz control groups. In previous research, the success of administering quizzes after covering a topic (i.e., postlecture quizzes) was contingent on the quizzes and the subsequent exams being of similar level and content. However,…
Descriptors: Test Format, Lecture Method, Multiple Choice Tests, Essay Tests
Huang, Yi-Min; Trevisan, Mike; Storfer, Andrew – International Journal for the Scholarship of Teaching and Learning, 2007
Despite the prevalence of multiple choice items in educational testing, there is a dearth of empirical evidence for multiple choice item writing rules. The purpose of this study was to expand the base of empirical evidence by examining the use of the "all-of-the-above" option in a multiple choice examination in order to assess how…
Descriptors: Multiple Choice Tests, Educational Testing, Ability Grouping, Test Format
Boser, Judith A.; Clark, Sheldon B. – 1990
This study of survey research experts was conducted to determine desirable characteristics of mail questionnaires. The 82-item Likert-scale instrument used in the study covered general appearance, instructions, choice of items, choice of response options, wording, order of items, and item format. The instrument was administered to: 8 subjects who…
Descriptors: Attitude Measures, Item Analysis, Likert Scales, Mail Surveys
Ebel, Robert L. – 1981
An alternate-choice test item is a simple declarative sentence, one portion of which is given with two different wordings. For example, "Foundations like Ford and Carnegie tend to be (1) eager (2) hesitant to support innovative solutions to educational problems." The examinee's task is to choose the alternative that makes the sentence…
Descriptors: Comparative Testing, Difficulty Level, Guessing (Tests), Multiple Choice Tests
Hodson, D. – Research in Science and Technological Education, 1984 (peer reviewed)
Investigated the effect on student performance of changes in question structure and sequence on a GCE O-level multiple-choice chemistry test. One finding noted is that test reliability was virtually unchanged when the number of options per test item was reduced from five. (JN)
Descriptors: Academic Achievement, Chemistry, Multiple Choice Tests, Science Education
Fitzpatrick, Anne R. – 2002
This study, one of a series designed to answer practical questions about performance based assessment, examined the comparability of school scores on short, nonparallel test forms. The data were obtained from mathematics tests with both multiple choice (MC) and performance assessment (PA) items. The tests were administered in a statewide testing…
Descriptors: Comparative Analysis, Mathematics Tests, Multiple Choice Tests, Performance Based Assessment
Wilcox, Rand R. – Educational and Psychological Measurement, 1982 (peer reviewed)
When determining criterion-referenced test length, problems of guessing are shown to be more serious than expected. A new method of scoring is presented that corrects for guessing without assuming that guessing is random. Empirical investigations of the procedure are examined. Test length can be substantially reduced. (Author/CM)
Descriptors: Criterion Referenced Tests, Guessing (Tests), Multiple Choice Tests, Scoring
Weiten, Wayne – Journal of Experimental Education, 1982 (peer reviewed)
A comparison of double as opposed to single multiple-choice questions yielded significant differences in regard to item difficulty, item discrimination, and internal reliability, but not concurrent validity. (Author/PN)
Descriptors: Difficulty Level, Educational Testing, Higher Education, Multiple Choice Tests