Yelvington, James Yowell; Brady, Raymond G. – Educational Technology, 1979
Corrective feedback testing, in which scoring is based on the number of attempts needed to reach the correct answer, is compared with single-attempt testing. (RAO)
Descriptors: Educational Testing, Feedback, Intermode Differences, Multiple Choice Tests
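The attempts-based scoring described above is easy to make concrete. A minimal sketch in Python, assuming (purely for illustration) that item credit falls off linearly with each extra attempt; the study's actual scoring rule is not specified in the abstract:

```python
# Sketch of answer-until-correct scoring: credit for an item decreases
# with each attempt the examinee needed to reach the correct answer.
# The linear penalty below is an illustrative assumption, not the
# scheme used in the study.

def item_credit(attempts_needed: int, n_options: int = 4) -> float:
    """Full credit for a first-attempt success, less for each retry."""
    if not 1 <= attempts_needed <= n_options:
        raise ValueError("attempts must be between 1 and the option count")
    return (n_options - attempts_needed) / (n_options - 1)

def test_score(attempts_per_item: list[int], n_options: int = 4) -> float:
    """Total credit over a test, one attempt count per item."""
    return sum(item_credit(a, n_options) for a in attempts_per_item)

# A 4-option item answered on the 1st, 2nd, and 4th attempt
# earns credits of 1.0, 2/3, and 0.0 respectively.
print(test_score([1, 2, 4]))  # 1.666...
```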
Abu-Sayf, F. K. – Educational Technology, 1979
Compares methods of scoring multiple-choice tests, discussing number-right scoring, guessing, and omitted items. Test instructions and answer changing are addressed, and attempts to weight test items are reviewed. It is concluded that, since innovations in test scoring are not well established, the number-right method is most appropriate. (RAO)
Descriptors: Guessing (Tests), Multiple Choice Tests, Objective Tests, Scoring
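The scoring alternatives this abstract weighs can be stated compactly. A short sketch contrasting number-right scoring with the classical correction-for-guessing formula S = R − W/(k − 1), where R is the number right, W the number wrong, and k the number of options per item; this is the standard textbook correction, not necessarily the exact variants the author reviews:

```python
def number_right(responses: list[bool | None]) -> int:
    """Number-right scoring: count correct answers; wrong answers and
    omits (None) both earn zero -- the method the author recommends."""
    return sum(1 for r in responses if r is True)

def formula_score(responses: list[bool | None], k: int = 4) -> float:
    """Classical correction for guessing: S = R - W/(k-1).
    Omitted items (None) are neither rewarded nor penalized."""
    right = sum(1 for r in responses if r is True)
    wrong = sum(1 for r in responses if r is False)
    return right - wrong / (k - 1)

answers = [True, True, False, None, True]   # 3 right, 1 wrong, 1 omit
print(number_right(answers))    # 3
print(formula_score(answers))   # 3 - 1/3 = 2.666...
```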
Love, Thomas E. – Psychometrika, 1997 (peer reviewed)
Presents a latent variable representation for multiple-choice items and option characteristic curves, and proposes a criterion for distractors based on distractor selection ratios. Results allow for testing the criterion from observable data without specifying a parametric form for the characteristic curves. (Author/SLD)
Descriptors: Criteria, Distractors (Tests), Item Response Theory, Multiple Choice Tests
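As a rough illustration only (the abstract does not reproduce Love's formal latent-variable definition), a distractor selection ratio can be operationalized as the share of incorrect responders who chose each distractor:

```python
from collections import Counter

def distractor_selection_ratios(choices: list[str], key: str) -> dict[str, float]:
    """Among examinees who missed the item, the fraction choosing each
    distractor. This operationalization is an assumption for
    illustration; see Love (1997) for the formal definition."""
    wrong = [c for c in choices if c != key]
    total = len(wrong)
    return {opt: n / total for opt, n in Counter(wrong).items()} if total else {}

# Responses to one item keyed "B"
responses = list("ABBCADDCBA")
print(distractor_selection_ratios(responses, key="B"))
# {'A': 0.428..., 'C': 0.285..., 'D': 0.285...}
```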
Bao, Lei; Redish, Edward F. – American Journal of Physics, 2001 (peer reviewed)
Explains that multiple-choice tests such as the Force Concept Inventory (FCI) are useful instruments for probing the distribution of student difficulties on a large scale. Introduces a new method, concentration analysis, to measure how students' responses on multiple-choice questions are distributed. (Contains 18 references.) (Author/YDS)
Descriptors: Evaluation Methods, Higher Education, Multiple Choice Tests, Physics
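The concentration measure lends itself to a short sketch. The concentration factor is commonly presented as C = (√m/(√m − 1))(√(Σᵢ nᵢ²)/N − 1/√m), where m is the number of options, nᵢ the count of students choosing option i, and N the total; this form is my reading and should be checked against the paper itself:

```python
import math

def concentration_factor(counts: list[int]) -> float:
    """Concentration factor for one multiple-choice item, in the style
    of Bao & Redish (2001). counts[i] is the number of students
    choosing option i. C = 0 when responses are uniform over the m
    options; C = 1 when everyone picks the same option."""
    m = len(counts)
    N = sum(counts)
    root_m = math.sqrt(m)
    return (root_m / (root_m - 1)) * (
        math.sqrt(sum(n * n for n in counts)) / N - 1 / root_m
    )

print(concentration_factor([25, 25, 25, 25]))  # 0.0  (uniform responses)
print(concentration_factor([100, 0, 0, 0]))    # 1.0  (fully concentrated)
print(concentration_factor([70, 10, 10, 10]))  # ~0.44, in between
```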
Tuckman, Bruce W. – NASSP Bulletin, 1993 (peer reviewed)
Essay tests are easily constructed, relatively valid assessments of higher cognitive processes but are harder to score reliably. Teachers using essay tests are advised to follow clearly designed objectives, construct all-inclusive, pilot-tested questions, develop a checklist of specific scoring points and a model answer for each question, and use…
Descriptors: Essay Tests, Multiple Choice Tests, Scoring, Secondary Education
Morrison, Susan; Free, Kathleen Walsh – Journal of Nursing Education, 2001 (peer reviewed)
Presents guidelines for developing multiple-choice tests to measure critical thinking in nursing. Explains the rationale for test items and describes item criteria, including measurement of cognition at the application level and above, multilogical thinking, and high level of discrimination. (Contains 38 references.) (SK)
Descriptors: Critical Thinking, Guidelines, Higher Education, Multiple Choice Tests
Masters, Joan C.; Hulsmeyer, Barbara S.; Pike, Mary E.; Leichty, Kathy; Miller, Margaret T.; Verst, Amy L. – Journal of Nursing Education, 2001 (peer reviewed)
A sample of 2,913 questions from 17 nursing test banks was evaluated for adherence to multiple-choice guidelines, cognitive level in Bloom's Taxonomy, and distribution of correct answers. Analysis revealed 2,233 guideline violations; 47.3% of items were written at the knowledge level, 6.5% at the analysis level; and correct answers were evenly…
Descriptors: Higher Education, Item Analysis, Item Banks, Multiple Choice Tests
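The answer-position analysis the study performs can be approximated with a chi-square goodness-of-fit test; the sketch below uses made-up counts, not the study's own 2,913-item data:

```python
from collections import Counter
from scipy.stats import chisquare

# Positions (A-D) of the keyed answer across a hypothetical item bank;
# these counts are illustrative, not taken from the study.
keys = ["A"] * 740 + ["B"] * 720 + ["C"] * 735 + ["D"] * 718

counts = Counter(keys)
observed = [counts[p] for p in "ABCD"]
stat, p_value = chisquare(observed)  # null: keys evenly spread over A-D
print(observed, round(stat, 2), round(p_value, 3))
# A p-value well above 0.05 is consistent with an even distribution.
```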
Berube, Clair T. – Clearing House, 2004 (peer reviewed)
The National Standards movement seeks to raise the quality of the American educational system. According to one of its chief architects, Diane Ravitch (2000), national standards give clear expectations for students, teachers, parents, colleges, and employers that will result in improved student achievement. Forty-nine of the fifty states (save…
Descriptors: Standardized Tests, Multiple Choice Tests, Scores, State Standards
Dickson, K. Laurie; Miller, Michelle D. – Teaching of Psychology, 2005
We experimentally investigated the effect of authorized crib cards on undergraduates' multiple-choice exam performance for lower order and higher order questions and on anxiety levels in an upper division child and adolescent development course. Students (N = 54) in 2 sections could use crib cards during 2 of the 4 exams. Despite student…
Descriptors: Undergraduate Students, Testing, Student Attitudes, Test Anxiety
Castor, Theresa – New Directions for Teaching and Learning, 2004
In the project described in this chapter, discussions during group exams were analyzed to gain insight into how students make group decisions for their answers and the implications of their decision-making methods for deepening student understanding.
Descriptors: Group Testing, Group Discussion, Discussion (Teaching Technique), Decision Making
Olson, Lynn – Education Week, 2005
Twenty-three states are expanding their testing programs to additional grades this school year to comply with the federal No Child Left Behind Act. In devising the new tests, most states have defied predictions and chosen to go beyond multiple-choice items, by including questions that ask students to construct their own responses. But many state…
Descriptors: Testing Programs, Federal Legislation, Measurement, Multiple Choice Tests
Sinharay, Sandip; Johnson, Matthew S.; Williamson, David M. – Journal of Educational and Behavioral Statistics, 2003
Item families, which are groups of related items, are becoming increasingly popular in complex educational assessments. For example, in automatic item generation (AIG) systems, a test may consist of multiple items generated from each of a number of item models. Item calibration or scoring for such an assessment requires fitting models that can…
Descriptors: Test Items, Markov Processes, Educational Testing, Probability
Wentzel, Carolyn – Journal of Science Education and Technology, 2006
INTEGRITY, an item analysis and statistical collusion detection (answer copying) online application, was reviewed. Features of the software and examples of program output are described in detail. INTEGRITY was found to be easily utilized with an abundance of well-organized documentation and built-in features designed to guide the user through the…
Descriptors: Item Analysis, Computer Software, Multiple Choice Tests, Costs
van der Linden, Wim J.; Sotaridona, Leonardo – Journal of Educational and Behavioral Statistics, 2006
A statistical test for detecting answer copying on multiple-choice items is presented. The test is based on the exact null distribution of the number of random matches between two test takers under the assumption that the response process follows a known response model. The null distribution can easily be generalized to the family of distributions…
Descriptors: Test Items, Multiple Choice Tests, Cheating, Responses
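A heavily simplified sketch of the idea: the paper derives an exact null distribution from a response model, but if one assumes, as a toy simplification, that each item matches independently with a known probability, the match count is binomial and the test reduces to an upper-tail probability:

```python
from scipy.stats import binom

def copying_p_value(matches: int, n_items: int, p_match: float) -> float:
    """Upper-tail probability of at least `matches` identical responses
    between two test takers, under a plain binomial null. The paper's
    actual test uses an exact null derived from a response model (a
    generalized binomial), not this iid simplification."""
    return binom.sf(matches - 1, n_items, p_match)

# 60-item test; suppose the response model implies a 0.30 chance that
# two honest examinees match on any given item; 30 matches observed.
print(copying_p_value(30, 60, 0.30))  # small p-value flags possible copying
```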
Chou, Chien; Chan, Pei-Shan; Wu, Huan-Chueh – British Journal of Educational Technology, 2007
The purpose of this study is to explore students' understanding of cyber copyright laws. This study developed a two-tier test with 10 two-level multiple-choice questions. The first tier presented a real-case scenario and asked whether the conduct was acceptable, whereas the second tier provided reasons to justify the conduct. Students in Taiwan…
Descriptors: College Students, Copyrights, Foreign Countries, Internet
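The two-tier format is simple to represent: an item is typically credited only when both the tier-1 judgment and the tier-2 justification are correct. A minimal sketch with hypothetical field names, not the instrument's own:

```python
from dataclasses import dataclass

@dataclass
class TwoTierResponse:
    # Field names are illustrative, not taken from the instrument.
    judgment_correct: bool   # tier 1: is the conduct acceptable?
    reason_correct: bool     # tier 2: is the justification right?

def score(responses: list[TwoTierResponse]) -> int:
    """Credit only items where both tiers are answered correctly,
    the usual scoring rule for two-tier diagnostic tests."""
    return sum(r.judgment_correct and r.reason_correct for r in responses)

answers = [TwoTierResponse(True, True), TwoTierResponse(True, False)]
print(score(answers))  # 1: a right judgment with a wrong reason earns no credit
```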
