Publication Date
| Date Range | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 1 |
| Since 2017 (last 10 years) | 4 |
| Since 2007 (last 20 years) | 8 |
Descriptor
| Descriptor | Results |
| --- | --- |
| Difficulty Level | 19 |
| Multiple Choice Tests | 19 |
| Scoring | 19 |
| Test Items | 14 |
| Test Construction | 7 |
| Test Reliability | 7 |
| Higher Education | 5 |
| Test Format | 5 |
| Computer Assisted Testing | 4 |
| Item Response Theory | 4 |
| Responses | 4 |
Publication Type
| Publication Type | Results |
| --- | --- |
| Reports - Research | 9 |
| Journal Articles | 8 |
| Reports - Evaluative | 4 |
| Dissertations/Theses -… | 2 |
| Information Analyses | 2 |
| Reports - Descriptive | 2 |
| Speeches/Meeting Papers | 2 |
| Guides - Non-Classroom | 1 |
| Tests/Questionnaires | 1 |
Education Level
| Education Level | Results |
| --- | --- |
| Elementary Education | 2 |
| Elementary Secondary Education | 2 |
| Secondary Education | 2 |
| Grade 6 | 1 |
| Grade 7 | 1 |
| High Schools | 1 |
| Junior High Schools | 1 |
| Middle Schools | 1 |
Location
| Location | Results |
| --- | --- |
| California | 1 |
| Canada | 1 |
| Florida | 1 |
| Germany | 1 |
| Tennessee | 1 |
Assessments and Surveys
| Assessment or Survey | Results |
| --- | --- |
| English Proficiency Test | 1 |
| National Assessment of… | 1 |
Alicia A. Stoltenberg – ProQuest LLC, 2024
Multiple-select multiple-choice items, or multiple-choice items with more than one correct answer, are used to quickly assess content on standardized assessments. Because there are multiple keys to these item types, there are also multiple ways to score student responses to these items. The purpose of this study was to investigate how changing the…
Descriptors: Scoring, Evaluation Methods, Multiple Choice Tests, Standardized Tests
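
The scoring question raised in the Stoltenberg (2024) entry above can be made concrete with a short sketch. The two rules below (all-or-nothing versus option-level partial credit) are common generic choices, not the rules examined in the dissertation, and the item data are invented.

```python
# Illustrative sketch: two generic ways to score a multiple-select
# multiple-choice item whose key is a set of correct options.

def score_all_or_nothing(key: set, response: set) -> float:
    """Full credit only if the selected options match the key exactly."""
    return 1.0 if response == key else 0.0

def score_partial_credit(key: set, options: set, response: set) -> float:
    """Credit for every option classified correctly (selected when keyed,
    left unselected when not), scaled to the 0-1 range."""
    correct = sum(1 for opt in options if (opt in key) == (opt in response))
    return correct / len(options)

options = {"A", "B", "C", "D", "E"}
key = {"A", "C"}
response = {"A", "C", "D"}  # one extra option selected

print(score_all_or_nothing(key, response))            # 0.0
print(score_partial_credit(key, options, response))   # 0.8
```

Changing the rule changes the score a given response earns, which is the kind of scoring decision the entry refers to.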
Lin, Chih-Kai – Language Assessment Quarterly, 2018
With multiple options to choose from, there is always a chance of lucky guessing by examinees on multiple-choice (MC) items, thereby potentially introducing bias in item difficulty estimates. Correct responses by random guessing thus pose threats to the validity of claims made from test performance on an MC test. Under the Rasch framework, the…
Descriptors: Guessing (Tests), Item Response Theory, Multiple Choice Tests, Language Tests
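
A minimal sketch of the bias mechanism described in the Lin (2018) entry, assuming a pseudo-guessing floor of 1/k for a k-option item; the Rasch-versus-3PL comparison below is a generic illustration, not the correction procedure used in the article.

```python
import math

def p_rasch(theta: float, b: float) -> float:
    """Rasch model: probability of a correct response, no guessing."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def p_with_guessing(theta: float, b: float, c: float) -> float:
    """Same item with a pseudo-guessing floor c (e.g. 1/k for k options):
    P = c + (1 - c) * P_rasch, as in a 3PL-style model."""
    return c + (1.0 - c) * p_rasch(theta, b)

theta, b, k = -1.0, 0.0, 4   # low-ability examinee, 4-option item
print(round(p_rasch(theta, b), 3))                 # 0.269
print(round(p_with_guessing(theta, b, 1 / k), 3))  # 0.452
# Responses inflated by lucky guessing make the item look easier than it
# is, so a guessing-free Rasch fit underestimates its difficulty.
```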
Eckerly, Carol; Smith, Russell; Sowles, John – Practical Assessment, Research & Evaluation, 2018
The Discrete Option Multiple Choice (DOMC) item format was introduced by Foster and Miller (2009) with the intent of improving the security of test content. However, because the format changes the amount and order of the content presented, the test-taking experience varies by test taker, thereby introducing potential fairness issues. In this paper we…
Descriptors: Culture Fair Tests, Multiple Choice Tests, Testing, Test Items
Kurniawan, Muhammad Ali; Rahayu, Sri; Fajaroh, Fauziatul; Almuntasheri, Saeed – Journal of Science Learning, 2020
This study aims to compare the impact of the Dual Situated Learning Model (DSLM) and conventional instructions in improving High School Students' understanding of chemical equilibrium concepts and the prevention of possible misconceptions. The study utilized a quasi-pretest-posttest control and experimental group design with two classes of XI SMAN…
Descriptors: Situated Learning, High School Students, Chemistry, Scientific Concepts
Rahman, Nazia – ProQuest LLC, 2013
Samejima hypothesized that non-monotonically increasing item response functions (IRFs) of ability might occur for multiple-choice items (referred to here as "Samejima items") if low ability test takers with some, though incomplete, knowledge or skill are drawn to a particularly attractive distractor, while very low ability test takers…
Descriptors: Multiple Choice Tests, Test Items, Item Response Theory, Probability
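
The non-monotonic pattern the Rahman (2013) entry describes can be mimicked with a toy mixture model: examinees who know the answer choose the key, those with only partial knowledge are drawn to an attractive distractor, and the rest guess at random. All parameters below are invented for illustration; this is not Samejima's model or the dissertation's analysis.

```python
import math

def p_correct(theta: float, k: int = 4) -> float:
    """Toy 'Samejima item': probability of choosing the keyed option.

    Full-knowledge examinees answer correctly, partial-knowledge examinees
    are pulled to an attractive distractor, and no-knowledge examinees
    guess uniformly over the k options.  Parameter values are invented.
    """
    p_know = 1.0 / (1.0 + math.exp(-1.7 * (theta - 1.0)))
    p_some = 1.0 / (1.0 + math.exp(-1.7 * (theta + 1.0)))  # at least partial
    p_partial = p_some - p_know
    p_none = 1.0 - p_some
    return p_know * 1.0 + p_partial * 0.05 + p_none * (1.0 / k)

for theta in (-3, -2, -1, 0, 1, 2, 3):
    print(theta, round(p_correct(theta), 3))
# The probability dips below the 1/k chance level around theta = -1 before
# rising toward 1.0, i.e. the item response function is non-monotonic.
```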
Liu, Ou Lydia; Lee, Hee-Sun; Linn, Marcia C. – Educational Assessment, 2011
Both multiple-choice and constructed-response items have known advantages and disadvantages in measuring scientific inquiry. In this article we explore the function of explanation multiple-choice (EMC) items and examine how EMC items differ from traditional multiple-choice and constructed-response items in measuring scientific reasoning. A group…
Descriptors: Science Tests, Multiple Choice Tests, Responses, Test Items
Bauer, Daniel; Holzer, Matthias; Kopp, Veronika; Fischer, Martin R. – Advances in Health Sciences Education, 2011
To compare different scoring algorithms for Pick-N (multiple correct answer) multiple-choice (MC) exams with regard to test reliability, student performance, total item discrimination, and item difficulty. Data from six end-of-term exams in internal medicine taken by 3rd-year medical students from 2005 to 2008 at Munich University were analysed (1,255 students,…
Descriptors: Medical Students, Test Reliability, Internal Medicine, Scoring
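
The comparison criteria named in the Bauer et al. (2011) entry (reliability, discrimination, difficulty) are classical test statistics that can be computed from an examinee-by-item score matrix, whatever Pick-N scoring algorithm produced it. The sketch below uses invented scores and standard formulas; it is not the study's analysis code.

```python
import statistics

def item_difficulty(item: list[float]) -> float:
    """Classical difficulty: mean item score (on the 0-1 scale)."""
    return statistics.mean(item)

def item_discrimination(item: list[float], totals: list[float]) -> float:
    """Item-total (Pearson) correlation as a discrimination index."""
    return statistics.correlation(item, totals)

def cronbach_alpha(matrix: list[list[float]]) -> float:
    """Cronbach's alpha for a persons-by-items score matrix."""
    n = len(matrix[0])
    item_vars = [statistics.pvariance([row[i] for row in matrix]) for i in range(n)]
    total_var = statistics.pvariance([sum(row) for row in matrix])
    return (n / (n - 1)) * (1 - sum(item_vars) / total_var)

# Invented scores: 4 examinees x 3 Pick-N items, already scored 0-1 under
# whichever scoring algorithm is being compared.
scores = [[1.0, 0.5, 0.0],
          [1.0, 1.0, 0.5],
          [0.5, 0.5, 0.0],
          [1.0, 1.0, 1.0]]
totals = [sum(row) for row in scores]
for i in range(3):
    item = [row[i] for row in scores]
    print(i, round(item_difficulty(item), 2), round(item_discrimination(item, totals), 2))
print("alpha:", round(cronbach_alpha(scores), 2))
```

Rescoring the same responses under a different algorithm changes this matrix and, with it, all three statistics, which is what the study compares.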
Sawchuk, Stephen – Education Digest: Essential Readings Condensed for Quick Review, 2010
Most experts in the testing community have presumed that the $350 million promised by the U.S. Department of Education to support common assessments would promote those that made greater use of open-ended items capable of measuring higher-order critical-thinking skills. But as measurement experts consider the multitude of possibilities for an…
Descriptors: Educational Quality, Test Items, Comparative Analysis, Multiple Choice Tests
Arnold, J. C. – J Exp Educ, 1969
Descriptors: Difficulty Level, Guessing (Tests), Mathematical Models, Methods
Feldt, Leonard S. – Applied Measurement in Education, 1993 (peer reviewed)
The recommendation that the reliability of multiple-choice tests will be enhanced if the distribution of item difficulties is concentrated at approximately 0.50 is reinforced and extended in this article by viewing the 0/1 item scoring as a dichotomization of an underlying normally distributed ability score. (SLD)
Descriptors: Ability, Difficulty Level, Guessing (Tests), Mathematical Models
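
Feldt's argument in the entry above can be illustrated numerically. The factor below is the standard biserial-to-point-biserial attenuation term for a 0/1 item cut from a normally distributed ability score; the sketch is a generic illustration of why it peaks at p = 0.50, not a reproduction of the article's derivation.

```python
import math
from statistics import NormalDist

def dichotomization_factor(p: float) -> float:
    """phi(z_p) / sqrt(p(1-p)): the ceiling on how strongly a 0/1 item
    created by cutting a normal ability score (proportion correct p)
    can correlate with that underlying ability."""
    nd = NormalDist()
    z_p = nd.inv_cdf(1 - p)  # cut point leaving proportion p above it
    return nd.pdf(z_p) / math.sqrt(p * (1 - p))

for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(p, round(dichotomization_factor(p), 3))
# The factor peaks at p = 0.50 (about 0.798) and falls off symmetrically,
# which is the sense in which concentrating item difficulties near 0.50
# helps item-total correlations and hence reliability.
```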
Anderson, Paul S.; Hyers, Albert D. – 1991
Three descriptive statistics (difficulty, discrimination, and reliability) of multiple-choice (MC) test items were compared to those of a new (1980s) format of machine-scored questions. The new method, answer-bank multi-digit testing (MDT), uses alphabetized lists of up to 1,000 alternatives and approximates the completion style of assessment…
Descriptors: College Students, Comparative Testing, Computer Assisted Testing, Correlation
Meredith, John B., Jr. – 1978
The complexity of defining accurate passing scores with a minimum classification error when evaluating criterion-referenced, multiple-choice tests has been a major problem for classroom teachers. Therefore, a practical procedure in which the instructor determines the plausibility of each item option for the minimally acceptable examinee is…
Descriptors: Criterion Referenced Tests, Cutting Scores, Difficulty Level, Item Analysis
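
The procedure sketched in the Meredith (1978) entry is in the spirit of the Nedelsky method, where the cut score follows from how many options a minimally acceptable examinee cannot rule out on each item. The calculation below is the standard Nedelsky formula applied to invented judgments, not the paper's own data.

```python
def nedelsky_cut_score(remaining_options: list[int]) -> float:
    """Nedelsky-style cut score: the minimally acceptable examinee is
    assumed to eliminate the implausible options and guess at random
    among those remaining, so each item contributes 1 / (options left).
    """
    return sum(1 / k for k in remaining_options)

# Invented judgments for a 5-item test: options the minimally acceptable
# examinee cannot eliminate on each item.
remaining = [2, 3, 2, 4, 2]
print(round(nedelsky_cut_score(remaining), 2))   # 2.08 out of 5 points
```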
Frisbie, David A. – Educational Measurement: Issues and Practice, 1992 (peer reviewed)
Literature related to the multiple true-false (MTF) item format is reviewed. Each answer cluster of an MTF item may have several true items, and the correctness of each is judged independently. MTF tests appear efficient and reliable, although they are a bit harder than multiple-choice items for examinees. (SLD)
Descriptors: Achievement Tests, Difficulty Level, Literature Reviews, Multiple Choice Tests
Melican, Gerald J.; And Others – 1987
The effects of feedback about the ratings of other judges on subsequent ratings using the Nedelsky method and the ability of judges to retain or eliminate options in a manner consistent with the judgments of minimally competent examinees were studied using data from a basic algebra examination administered to 227 college students in 1987. The…
Descriptors: Certification, College Students, Cutting Scores, Difficulty Level
Hyers, Albert D.; Anderson, Paul S. – 1991
Using matched pairs of geography questions, a new testing method for machine-scored fill-in-the-blank, multiple-digit testing (MDT) questions was compared to the traditional multiple-choice (MC) style. Data were from 118 matched or parallel test items for 4 tests from 764 college students of geography. The new method produced superior results when…
Descriptors: College Students, Comparative Testing, Computer Assisted Testing, Difficulty Level
