Publication Date
| In 2026 | 0 |
| Since 2025 | 13 |
| Since 2022 (last 5 years) | 48 |
| Since 2017 (last 10 years) | 135 |
| Since 2007 (last 20 years) | 229 |
Descriptor
| Difficulty Level | 279 |
| Multiple Choice Tests | 279 |
| Test Items | 194 |
| Foreign Countries | 121 |
| Test Construction | 76 |
| Item Analysis | 69 |
| Test Reliability | 62 |
| Item Response Theory | 61 |
| Test Validity | 54 |
| Test Format | 53 |
| Undergraduate Students | 45 |
Author
| Andrich, David | 3 |
| Atalmis, Erkan Hasan | 3 |
| Cizek, Gregory J. | 3 |
| Fischer, Martin R. | 3 |
| Marais, Ida | 3 |
| Albanese, Mark A. | 2 |
| Bauer, Daniel | 2 |
| Bolt, Daniel M. | 2 |
| Bucak, S. Deniz | 2 |
| Bulut, Okan | 2 |
| Crisp, Victoria | 2 |
Publication Type
| Journal Articles | 279 |
| Reports - Research | 244 |
| Reports - Evaluative | 25 |
| Tests/Questionnaires | 21 |
| Reports - Descriptive | 9 |
| Information Analyses | 6 |
| Speeches/Meeting Papers | 2 |
| Guides - Non-Classroom | 1 |
Education Level
| Higher Education | 101 |
| Postsecondary Education | 81 |
| Secondary Education | 56 |
| Elementary Education | 37 |
| Middle Schools | 24 |
| High Schools | 20 |
| Intermediate Grades | 15 |
| Junior High Schools | 15 |
| Grade 6 | 11 |
| Grade 7 | 10 |
| Grade 8 | 9 |
Audience
| Teachers | 2 |
| Administrators | 1 |
| Practitioners | 1 |
| Researchers | 1 |
Location
| Turkey | 14 |
| Indonesia | 10 |
| Germany | 8 |
| Australia | 7 |
| Canada | 7 |
| Nigeria | 7 |
| Taiwan | 6 |
| Jordan | 5 |
| Malaysia | 4 |
| Thailand | 4 |
| United Kingdom | 4 |
Laws, Policies, & Programs
| No Child Left Behind Act of 2001 | 2 |
Aiman Mohammad Freihat; Omar Saleh Bani Yassin – Educational Process: International Journal, 2025
Background/purpose: This study aimed to reveal the accuracy of estimating multiple-choice test item parameters under item response theory models of measurement. Materials/methods: The researchers relied on measurement accuracy indicators, which express the absolute difference between the estimated and actual values of the…
Descriptors: Accuracy, Computation, Multiple Choice Tests, Test Items
Berenbon, Rebecca F.; McHugh, Bridget C. – Educational Measurement: Issues and Practice, 2023
To assemble a high-quality test, psychometricians rely on subject matter experts (SMEs) to write high-quality items. However, SMEs are not typically given the opportunity to provide input on which content standards are most suitable for multiple-choice questions (MCQs). In the present study, we explored the relationship between perceived MCQ…
Descriptors: Test Items, Multiple Choice Tests, Standards, Difficulty Level
Chen, Yun-Zu; Yang, Kai-Lin – Applied Cognitive Psychology, 2023
This study investigated whether the three variables of task form, squares carried, and figural complexity, for designing cube folding tasks, affect sixth graders' cube folding performance. Two task forms were used to develop two versions of "cube folding test." Each version was designed based on two levels of squares carried and three…
Descriptors: Elementary School Students, Grade 6, Geometric Concepts, Task Analysis
E. B. Merki; S. I. Hofer; A. Vaterlaus; A. Lichtenberger – Physical Review Physics Education Research, 2025
When describing motion in physics, the selection of a frame of reference is crucial. The graph of a moving object can look quite different based on the frame of reference. In recent years, various tests have been developed to assess the interpretation of kinematic graphs, but none of these tests have specifically addressed differences in reference…
Descriptors: Graphs, Motion, Physics, Secondary School Students
Martin Steinbach; Carolin Eitemüller; Marc Rodemer; Maik Walpuski – International Journal of Science Education, 2025
The intricate relationship between representational competence and content knowledge in organic chemistry has been widely debated, and the ways in which representations contribute to task difficulty, particularly in assessment, remain unclear. This paper presents a multiple-choice test instrument for assessing individuals' knowledge of fundamental…
Descriptors: Organic Chemistry, Difficulty Level, Multiple Choice Tests, Fundamental Concepts
Syed Mujahid Hussain; Aqdas Malik; Nisar Ahmad; Sheraz Ahmed – Journal of Educational Technology Systems, 2025
This study assesses the performance of ChatGPT in comparison with that of undergraduate students in 60 multiple-choice questions (MCQs) of Corporate Finance exams that sought to measure students' abilities to solve different types of questions (descriptive and numerical) and of varying difficulty levels (basic and intermediate). Our results…
Descriptors: Business Education, Finance Occupations, Undergraduate Students, Multiple Choice Tests
Lang, Joseph B. – Journal of Educational and Behavioral Statistics, 2023
This article is concerned with the statistical detection of copying on multiple-choice exams. As an alternative to existing permutation- and model-based copy-detection approaches, a simple randomization p-value (RP) test is proposed. The RP test, which is based on an intuitive match-score statistic, makes no assumptions about the distribution of…
Descriptors: Identification, Cheating, Multiple Choice Tests, Item Response Theory
Ludewig, Ulrich; Schwerter, Jakob; McElvany, Nele – Journal of Psychoeducational Assessment, 2023
A better understanding of how distractor features influence the plausibility of distractors is essential for an efficient multiple-choice (MC) item construction in educational assessment. The plausibility of distractors has a major influence on the psychometric characteristics of MC items. Our analysis utilizes the nominal categories model to…
Descriptors: Vocabulary, Language Tests, German, Grade 4
Emily K. Toutkoushian; Huaping Sun; Mark T. Keegan; Ann E. Harman – Measurement: Interdisciplinary Research and Perspectives, 2024
Linear logistic test models (LLTMs), leveraging item response theory and linear regression, offer an elegant method for learning about item characteristics in complex content areas. This study used LLTMs to model single-best-answer, multiple-choice-question response data from two medical subspecialty certification examinations in multiple years…
Descriptors: Licensing Examinations (Professions), Certification, Medical Students, Test Items
Lae Lae Shwe; Sureena Matayong; Suntorn Witosurapot – Education and Information Technologies, 2024
Multiple Choice Questions (MCQs) are an important evaluation technique for both examinations and learning activities. However, the manual creation of questions is time-consuming and challenging for teachers. Hence, there is a notable demand for an Automatic Question Generation (AQG) system. Several systems have been created for this aim, but the…
Descriptors: Difficulty Level, Computer Assisted Testing, Adaptive Testing, Multiple Choice Tests
Douglas-Morris, Jan; Ritchie, Helen; Willis, Catherine; Reed, Darren – Anatomical Sciences Education, 2021
Multiple-choice (MC) anatomy "spot-tests" (identification-based assessments on tagged cadaveric specimens) offer a practical alternative to traditional free-response (FR) spot-tests. Conversion of the two spot-tests in an upper limb musculoskeletal anatomy unit of study from FR to a novel MC format, where one of five tagged structures on…
Descriptors: Multiple Choice Tests, Anatomy, Test Reliability, Difficulty Level
Arandha May Rachmawati; Agus Widyantoro – English Language Teaching Educational Journal, 2025
This study aims to evaluate the quality of English reading comprehension test instruments used in informal learning, especially as English literacy tests. With a quantitative approach, the analysis was carried out using the Rasch model through the Quest program on 30 multiple-choice questions given to 30 grade IX students from informal educational…
Descriptors: Item Response Theory, Reading Tests, Reading Comprehension, English (Second Language)
Jin, Kuan-Yu; Siu, Wai-Lok; Huang, Xiaoting – Journal of Educational Measurement, 2022
Multiple-choice (MC) items are widely used in educational tests. Distractor analysis, an important procedure for checking the utility of response options within an MC item, can be readily implemented in the framework of item response theory (IRT). Although random guessing is a popular behavior of test-takers when answering MC items, none of the…
Descriptors: Guessing (Tests), Multiple Choice Tests, Item Response Theory, Attention
Thayaamol Upapong; Apantee Poonputta – Educational Process: International Journal, 2025
Background/purpose: The purposes of this research are to develop a reliable and valid assessment tool for measuring systems thinking skills in upper primary students in Thailand and to establish a normative criterion for evaluating their systems thinking abilities based on educational standards. Materials/methods: The study followed a three-phase…
Descriptors: Thinking Skills, Elementary School Students, Measures (Individuals), Foreign Countries
Eka Febri Zulissetiana; Muhammad Irfannuddin; Siti Sarahdeaz Fazzaura Putri; Syifa Alkaf; Susilawati Susilawati; Jihan Marshanda; Ra Fadila Septiany; Hasyimiah Az-Zahra; Robert G. Carroll – Advances in Physiology Education, 2024
Complex subjects such as physiology can be challenging for students to learn. These challenges commonly arise in physiology instruction and affect learning outcomes. Dramatization is an interactive and effective method for improving learning outcomes. In a project designed by senior medical students, junior medical students…
Descriptors: Drama, Teaching Methods, Physiology, Science Instruction
