Publication Date
| Date range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 26 |
| Since 2022 (last 5 years) | 144 |
| Since 2017 (last 10 years) | 357 |
| Since 2007 (last 20 years) | 584 |
Descriptor
| Descriptor | Records |
| --- | --- |
| Multiple Choice Tests | 1154 |
| Test Items | 1154 |
| Test Construction | 414 |
| Foreign Countries | 336 |
| Difficulty Level | 298 |
| Test Format | 260 |
| Item Analysis | 244 |
| Item Response Theory | 177 |
| Test Reliability | 172 |
| Higher Education | 162 |
| Test Validity | 161 |
Author
| Author | Records |
| --- | --- |
| Haladyna, Thomas M. | 14 |
| Plake, Barbara S. | 8 |
| Samejima, Fumiko | 8 |
| Downing, Steven M. | 7 |
| Bennett, Randy Elliot | 6 |
| Cheek, Jimmy G. | 6 |
| Huntley, Renee M. | 6 |
| Katz, Irvin R. | 6 |
| Kim, Sooyeon | 6 |
| McGhee, Max B. | 6 |
| Suh, Youngsuk | 6 |
Audience
| Audience | Records |
| --- | --- |
| Practitioners | 40 |
| Students | 30 |
| Teachers | 28 |
| Researchers | 26 |
| Administrators | 5 |
| Counselors | 1 |
Location
| Location | Records |
| --- | --- |
| Canada | 62 |
| Australia | 37 |
| Turkey | 29 |
| Indonesia | 22 |
| Germany | 14 |
| Iran | 11 |
| Nigeria | 11 |
| Malaysia | 10 |
| China | 9 |
| Taiwan | 9 |
| United Kingdom | 9 |
Laws, Policies, & Programs
| Law, Policy, or Program | Records |
| --- | --- |
| No Child Left Behind Act 2001 | 4 |
| National Defense Education Act | 1 |
What Works Clearinghouse Rating
| Rating | Records |
| --- | --- |
| Does not meet standards | 1 |
Emily K. Toutkoushian; Huaping Sun; Mark T. Keegan; Ann E. Harman – Measurement: Interdisciplinary Research and Perspectives, 2024
Linear logistic test models (LLTMs), leveraging item response theory and linear regression, offer an elegant method for learning about item characteristics in complex content areas. This study used LLTMs to model single-best-answer, multiple-choice-question response data from two medical subspecialty certification examinations in multiple years…
Descriptors: Licensing Examinations (Professions), Certification, Medical Students, Test Items
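For background, the LLTM named above is conventionally a Rasch model whose item difficulty is decomposed into a weighted sum of feature effects. A standard textbook formulation (Fischer's original model, not a formula quoted from this article) is:

```latex
% LLTM: Rasch model with item difficulty decomposed into K feature effects
P(X_{vi} = 1 \mid \theta_v) =
  \frac{\exp\!\left(\theta_v - \sum_{k=1}^{K} q_{ik}\,\eta_k\right)}
       {1 + \exp\!\left(\theta_v - \sum_{k=1}^{K} q_{ik}\,\eta_k\right)}
```

Here θ_v is the ability of person v, q_ik is the design weight of feature k on item i, and η_k is the difficulty contribution of feature k; the linear-regression flavor comes from this decomposition of item difficulty.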
Lae Lae Shwe; Sureena Matayong; Suntorn Witosurapot – Education and Information Technologies, 2024
Multiple Choice Questions (MCQs) are an important evaluation technique for both examinations and learning activities. However, the manual creation of questions is time-consuming and challenging for teachers. Hence, there is a notable demand for an Automatic Question Generation (AQG) system. Several systems have been created for this aim, but the…
Descriptors: Difficulty Level, Computer Assisted Testing, Adaptive Testing, Multiple Choice Tests
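As a rough sketch of what an AQG pipeline does at its simplest (a toy illustration, not one of the systems reviewed in this article; all names below are hypothetical):

```python
# Toy cloze-style question generator: blank out a target word in a sentence
# and mix it with distractors drawn from a candidate pool.
import random

def make_cloze_item(sentence: str, answer: str, pool: list[str], k: int = 3) -> dict:
    """Turn one sentence into a four-option multiple-choice cloze item."""
    stem = sentence.replace(answer, "_____", 1)
    distractors = random.sample([w for w in pool if w != answer], k)
    options = distractors + [answer]
    random.shuffle(options)
    return {"stem": stem, "options": options, "key": options.index(answer)}

item = make_cloze_item(
    "Reliability describes the consistency of scores across repeated testing.",
    "consistency",
    ["validity", "difficulty", "discrimination", "objectivity"],
)
print(item["stem"])
print(item["options"], "key:", item["key"])
```

Real AQG systems replace the hand-supplied answer and distractor pool with NLP components (keyword extraction, semantic similarity), which is where most of the engineering effort lies.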
Gorney, Kylie – ProQuest LLC, 2023
Aberrant behavior refers to any type of unusual behavior that would not be expected under normal circumstances. In educational and psychological testing, such behaviors have the potential to severely bias the aberrant examinee's test score while also jeopardizing the test scores of countless others. It is therefore crucial that aberrant examinees…
Descriptors: Behavior Problems, Educational Testing, Psychological Testing, Test Bias
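One standard family of tools for flagging such examinees is person-fit statistics; a common index (offered as background, not necessarily the method developed in this dissertation) is the standardized log-likelihood statistic for dichotomous IRT responses:

```latex
% Person-fit: standardized log-likelihood statistic l_z
l_0 = \sum_{i} \left[ u_i \ln P_i(\theta) + (1 - u_i) \ln\!\left(1 - P_i(\theta)\right) \right],
\qquad
l_z = \frac{l_0 - \mathrm{E}(l_0)}{\sqrt{\mathrm{Var}(l_0)}}
```

Large negative values of l_z mark response patterns that are improbable under the fitted model, such as a high-ability examinee missing very easy items.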
Thompson, Kathryn N. – ProQuest LLC, 2023
It is imperative to collect validity evidence prior to interpreting and using test scores. During the process of collecting validity evidence, test developers should consider whether test scores are contaminated by sources of extraneous information. This is referred to as construct-irrelevant variance, or the "degree to which test scores are…
Descriptors: Test Wiseness, Test Items, Item Response Theory, Scores
Tabuena, Almighty C.; Morales, Glinore S. – Online Submission, 2021
Using a descriptive-developmental research design, this study identified and annotated appropriate multiple-choice test items in the cognitive domain of the taxonomy of educational objectives for assessing and evaluating musical learning. This assessment approach is one of the key skills required of Music teachers to…
Descriptors: Multiple Choice Tests, Test Items, Cognitive Objectives, Taxonomy
Sebastian Moncaleano – ProQuest LLC, 2021
The growth of computer-based testing over the last two decades has motivated the creation of innovative item formats. It is often argued that technology-enhanced items (TEIs) provide better measurement of test-takers' knowledge, skills, and abilities by increasing the authenticity of tasks presented to test-takers (Sireci & Zenisky, 2006).…
Descriptors: Computer Assisted Testing, Test Format, Test Items, Classification
Mehmet Kanik – International Journal of Assessment Tools in Education, 2024
Interest in ChatGPT has surged, prompting people to explore its use for a variety of tasks. However, before it is allowed to replace humans, its capabilities should be investigated. As ChatGPT has potential for use in testing and assessment, this study investigates the questions generated by ChatGPT by comparing them to those written by a course…
Descriptors: Artificial Intelligence, Testing, Multiple Choice Tests, Test Construction
Arandha May Rachmawati; Agus Widyantoro – English Language Teaching Educational Journal, 2025
This study aims to evaluate the quality of English reading comprehension test instruments used in informal learning, especially as English literacy tests. Using a quantitative approach, the analysis was carried out with the Rasch model in the Quest program on 30 multiple-choice questions administered to 30 grade IX students from informal educational…
Descriptors: Item Response Theory, Reading Tests, Reading Comprehension, English (Second Language)
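For reference, the Rasch model fitted by programs such as Quest gives the probability of a correct response from person ability θ_n and item difficulty b_i:

```latex
% Dichotomous Rasch model
P(X_{ni} = 1 \mid \theta_n, b_i) = \frac{\exp(\theta_n - b_i)}{1 + \exp(\theta_n - b_i)}
```

Item fit in Quest-style analyses then asks how well observed responses to each of the 30 items match these model-implied probabilities.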
Falcão, Filipe; Costa, Patrício; Pêgo, José M. – Advances in Health Sciences Education, 2022
Background: Current demand for multiple-choice questions (MCQs) in medical assessment is greater than the supply. Consequently, an urgency for new item development methods arises. Automatic Item Generation (AIG) promises to overcome this burden, generating calibrated items based on the work of computer algorithms. Despite the promising scenario,…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Test Items, Medical Education
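Template-based generation is a widely described AIG approach: an item model with variable slots is instantiated many times to yield a family of parallel items. A minimal sketch under that assumption (not necessarily the algorithm studied by Falcão et al.):

```python
# Template-based automatic item generation: instantiate an item model with
# varying numeric slots; distractors encode common student errors. Toy sketch.
import random

def generate_addition_item(rng: random.Random) -> dict:
    a, b = rng.randint(10, 99), rng.randint(10, 99)
    key = a + b
    # Distractors model typical mistakes: off-by-ten, dropped carry, off-by-one.
    options = sorted({key, key + 10, key - 10, key + 1})
    return {"stem": f"What is {a} + {b}?", "options": options, "key": options.index(key)}

rng = random.Random(7)
for _ in range(3):
    print(generate_addition_item(rng))
```

In operational AIG the template and distractor rules come from subject-matter experts, and generated items are still calibrated (or have their difficulty predicted) before they are used for scoring.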
Acikgul, Kubra; Sad, Suleyman Nihat; Altay, Bilal – International Journal of Assessment Tools in Education, 2023
This study aimed to develop a useful test to measure university students' spatial abilities validly and reliably. Following a sequential explanatory mixed methods research design, first, qualitative methods were used to develop the trial items for the test; next, the psychometric properties of the test were analyzed through quantitative methods…
Descriptors: Spatial Ability, Scores, Multiple Choice Tests, Test Validity
McGuire, Michael J. – International Journal for the Scholarship of Teaching and Learning, 2023
College students in a lower-division psychology course made metacognitive judgments by predicting and postdicting performance for true-false, multiple-choice, and fill-in-the-blank question sets on each of three exams. This study investigated which question format would result in the most accurate metacognitive judgments. Extending Koriat's (1997)…
Descriptors: Metacognition, Multiple Choice Tests, Accuracy, Test Format
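Prediction/postdiction accuracy in such studies typically reduces to comparing judged scores with actual scores; a minimal sketch of a calibration-bias computation (hypothetical variable names, not McGuire's analysis code):

```python
# Calibration bias: mean signed difference between judged and actual scores.
# Positive values indicate overconfidence, negative values underconfidence.

def calibration_bias(judged: list[float], actual: list[float]) -> float:
    assert len(judged) == len(actual) and judged
    return sum(j - a for j, a in zip(judged, actual)) / len(judged)

predictions  = [80, 75, 90]  # percent correct predicted before each exam
postdictions = [70, 72, 88]  # percent correct estimated after each exam
scores       = [65, 74, 85]  # actual percent correct

print(calibration_bias(predictions, scores))   # prediction bias
print(calibration_bias(postdictions, scores))  # postdiction bias
```

Comparing the absolute bias across true-false, multiple-choice, and fill-in-the-blank sets is then what identifies the format yielding the most accurate judgments.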
Laura Kuusemets; Kristin Parve; Kati Ain; Tiina Kraav – International Journal of Education in Mathematics, Science and Technology, 2024
Using multiple-choice questions as learning and assessment tools is standard at all levels of education. However, a frequently cited disadvantage is the time and complexity involved in producing plausible distractor options, which offsets the time savings these formats offer for feedback. The article…
Descriptors: Program Evaluation, Artificial Intelligence, Computer Assisted Testing, Man Machine Systems
Lance Shultz – ProQuest LLC, 2024
Multiple-true-false (MTF) assessments can provide granular feedback on course materials; this granularity stems from the MTF question format, helps to enhance student understanding, and illuminates misconceptions that other assessment types can hide (Brassil & Couch, 2019). The purpose of this study was to document how students use and…
Descriptors: Objective Tests, Multiple Choice Tests, Test Items, Student Evaluation
Ute Mertens; Marlit A. Lindner – Journal of Computer Assisted Learning, 2025
Background: Educational assessments increasingly shift towards computer-based formats. Many studies have explored how different types of automated feedback affect learning. However, few studies have investigated how digital performance feedback affects test takers' ratings of affective-motivational reactions during a testing session. Method: In…
Descriptors: Educational Assessment, Computer Assisted Testing, Automation, Feedback (Response)
Andrew M. Olney – Grantee Submission, 2023
Multiple choice questions are traditionally expensive to produce. Recent advances in large language models (LLMs) have led to fine-tuned LLMs that generate questions competitive with human-authored ones. However, the relative capabilities of ChatGPT-family models have not yet been established for this task. We present a carefully controlled…
Descriptors: Test Construction, Multiple Choice Tests, Test Items, Algorithms
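As background on the kind of pipeline such comparisons evaluate, here is a minimal sketch of prompting a ChatGPT-family model for one MCQ via the openai Python client (the model name and prompt are assumptions, and this is not Olney's experimental setup):

```python
# Ask a chat model to draft one multiple-choice question for a passage.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

def generate_mcq(passage: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any chat model works
        messages=[
            {"role": "system",
             "content": "Write one four-option multiple-choice question "
                        "about the passage and mark the correct option."},
            {"role": "user", "content": passage},
        ],
    )
    return response.choices[0].message.content

print(generate_mcq("Photosynthesis converts light energy into chemical energy."))
```

Controlled comparisons of this kind typically hold the source material constant while varying only the item author, human or model.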

