Publication Date
| Date Range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 81 |
Author
| Author | Count |
| --- | --- |
| Ayesha Juhi | 2 |
| Himel Mondal | 2 |
| Pratima Gupta | 2 |
| A. Lichtenberger | 1 |
| A. Vaterlaus | 1 |
| Abderrazak Zaafour | 1 |
| Abdullah Al Fraidan | 1 |
| Abdullah Ali Khan | 1 |
| Abhimanyu Ganguly | 1 |
| Abner Rubin | 1 |
| Agus Widyantoro | 1 |
Publication Type
| Type | Count |
| --- | --- |
| Reports - Research | 78 |
| Journal Articles | 76 |
| Tests/Questionnaires | 8 |
| Reports - Evaluative | 2 |
| Speeches/Meeting Papers | 2 |
| Books | 1 |
| Information Analyses | 1 |
Education Level
| Level | Count |
| --- | --- |
| Higher Education | 47 |
| Postsecondary Education | 47 |
| Secondary Education | 18 |
| Elementary Education | 7 |
| Middle Schools | 7 |
| High Schools | 6 |
| Junior High Schools | 6 |
| Grade 8 | 3 |
| Elementary Secondary Education | 1 |
| Grade 11 | 1 |
| Grade 12 | 1 |
Audience
| Audience | Count |
| --- | --- |
| Administrators | 1 |
| Teachers | 1 |
Location
| Location | Count |
| --- | --- |
| Indonesia | 7 |
| Germany | 4 |
| India | 4 |
| United Kingdom | 4 |
| China | 3 |
| Thailand | 3 |
| Switzerland | 2 |
| Australia | 1 |
| Bosnia and Herzegovina | 1 |
| Cambodia | 1 |
| Canada | 1 |
Assessments and Surveys
| Assessment | Count |
| --- | --- |
| National Assessment of… | 3 |
| Program for International… | 2 |
| ACT Assessment | 1 |
| Force Concept Inventory | 1 |
| International English… | 1 |
| Test of English as a Foreign… | 1 |
Leonidas Zotos; Hedderik van Rijn; Malvina Nissim – International Educational Data Mining Society, 2025
In an educational setting, an estimate of the difficulty of Multiple-Choice Questions (MCQs), a commonly used strategy to assess learning progress, constitutes very useful information for both teachers and students. Since human assessment is costly from multiple points of view, automatic approaches to MCQ item difficulty estimation are…
Descriptors: Multiple Choice Tests, Test Items, Difficulty Level, Artificial Intelligence
A Two-Tier Multiple-Choice Diagnostic Test to Find Student Misconceptions about the Change of Matter
Rita Arfi Astuti Ningroom; Sri Yamtinah; Riyadi – Journal of Education and Learning (EduLearn), 2025
There are a lot of very interesting scientific concepts to learn in natural and social science. The initial concepts that the student possesses may contradict the actual concepts, which is what causes misconceptions. Misconceptions are identified using misconception detection test tools. In fact, the development of the use of diagnostic test…
Descriptors: Foreign Countries, Test Construction, Diagnostic Tests, Multiple Choice Tests
Kentaro Fukushima; Nao Uchida; Kensuke Okada – Journal of Educational and Behavioral Statistics, 2025
Diagnostic tests are typically administered in a multiple-choice (MC) format due to their advantages of objectivity and time efficiency. The MC-deterministic input, noisy "and" gate (DINA) family of models, a representative class of cognitive diagnostic models for MC items, efficiently and parsimoniously estimates the mastery profiles of…
Descriptors: Diagnostic Tests, Cognitive Measurement, Multiple Choice Tests, Educational Assessment
Kala Krishna; Pelin Akyol; Esma Ozer – National Bureau of Economic Research, 2025
Exams are designed to rank students objectively by their abilities, including elements such as time limits, the number and difficulty of questions, and negative marking policies. Using data from a lab-in-field experiment, we develop and estimate a model of student behavior in multiple-choice exams that incorporates the effects of time constraints…
Descriptors: Multiple Choice Tests, Student Behavior, Response Style (Tests), Time
Aiman Mohammad Freihat; Omar Saleh Bani Yassin – Educational Process: International Journal, 2025
Background/purpose: This study aimed to reveal the accuracy of estimation of multiple-choice test items parameters following the models of the item-response theory in measurement. Materials/methods: The researchers depended on the measurement accuracy indicators, which express the absolute difference between the estimated and actual values of the…
Descriptors: Accuracy, Computation, Multiple Choice Tests, Test Items
David Hope; David Kluth; Matthew Homer; Avril Dewar; Rikki Goddard-Fuller; Alan Jaap; Helen Cameron – Advances in Health Sciences Education, 2025
Rasch modelling is a powerful tool for evaluating item performance, measuring drift in difficulty over time, and comparing students who sat assessments at different times or at different sites. Here, we use data from thirty UK medical schools to describe the benefits of Rasch modelling in quality assurance and the barriers to using it. Sixty…
Descriptors: Item Response Theory, Medical Schools, Foreign Countries, Quality Assurance
Kashinath Boral; Krishna Kanta Mondal – Journal of Educational Technology Systems, 2025
This study evaluates the performance of three leading AI chatbots--OpenAI's ChatGPT, Google's Gemini, and Microsoft Bing Copilot--in answering multiple choice questions (MCQs) from the UGC-NET Education paper. Using 150 randomly selected questions from examination cycles between June 2019 and December 2023, the chatbots' accuracy was assessed…
Descriptors: Artificial Intelligence, Technology Uses in Education, Multiple Choice Tests, Program Effectiveness
Brent A. Stevenor; Nadine LeBarron McBride; Charles Anyanwu – Journal of Applied Testing Technology, 2025
Enemy items are two test items that should not be presented to a candidate on the same test. Identifying enemies is essential for personnel assessment, as they weaken the measurement precision and validity of a test. In this research, we examined the effectiveness of lexical and semantic natural language processing techniques for identifying enemy…
Descriptors: Test Items, Natural Language Processing, Occupational Tests, Test Construction
Jessica D. Kotik; Merilee McCurdy; Christopher H. Skinner; Chelsea S. Wilson; Valerie E. Hogan-Sandi; Catherine Smith; Brian E. Wilhoit – Psychology in the Schools, 2025
Inclusive Postsecondary Education programs are designed to provide students with disabilities the opportunity to seek postsecondary education while receiving explicit instruction and support in social and emotional, independent living, and vocational skills. While students enroll in college classes, there is little research available for how to…
Descriptors: Students with Disabilities, Intellectual Disability, Developmental Disabilities, Intervention
Victoria Crisp; Sylvia Vitello; Abdullah Ali Khan; Heather Mahy; Sarah Hughes – Research Matters, 2025
This research set out to enhance our understanding of the exam techniques and types of written annotations or markings that learners may wish to use to support their thinking when taking digital multiple-choice exams. Additionally, we aimed to further explore issues around the factors that contribute to learners writing less rough work and…
Descriptors: Computer Assisted Testing, Test Format, Multiple Choice Tests, Notetaking
Christian Myles; Laura Gorman; James F. X. Jones – Anatomical Sciences Education, 2025
Textbook anatomy depiction of the hepatobiliary tree is present in 55%-62% of the population. Misidentification of hepatobiliary variants can lead to bile duct injuries in cholecystectomies. A better understanding of variants has been cited as a key area for improvement in anatomy education. The aim of this study was to compare the effectiveness…
Descriptors: Computer Peripherals, Printing, Science Instruction, Teaching Methods
Anupkumar D. Dhanvijay; Amita Kumari; Mohammed Jaffer Pinjar; Anita Kumari; Abhimanyu Ganguly; Ankita Priya; Ayesha Juhi; Pratima Gupta; Himel Mondal – Advances in Physiology Education, 2025
Multiple-choice questions (MCQs) are widely used for assessment in medical education. While human-generated MCQs benefit from pedagogical insight, creating high-quality items is time intensive. With the advent of artificial intelligence (AI), tools like DeepSeek R1 offer potential for automated MCQ generation, though their educational validity…
Descriptors: Multiple Choice Tests, Physiology, Artificial Intelligence, Test Items
Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2025
Automated multiple-choice question (MCQ) generation is valuable for scalable assessment and enhanced learning experiences. However, existing MCQ generation methods face challenges in ensuring plausible distractors and maintaining answer consistency. This paper introduces a method for MCQ generation that integrates reasoning-based explanations…
Descriptors: Automation, Computer Assisted Testing, Multiple Choice Tests, Natural Language Processing
Laura S. Kabiri; Thomas M. McCabe; Catherine R. Barber; Augusto X. Rodriguez – HAPS Educator, 2025
Student-centered teaching techniques and equitable assessments are crucial in high-quality education. Asking students to write multiple-choice questions (MCQs) over course content and including them on exams may address both concurrently. Much of the work surveying students' responses to this practice involved graduate level education in…
Descriptors: Learner Engagement, Student Satisfaction, Student Developed Materials, Multiple Choice Tests
Alan D. Koenig; John J. Lee; Eric Savitsky; Gabriele Nataneli; Karson Lindstrom; David L. Schriger; Tyler Savitsky – National Center for Research on Evaluation, Standards, and Student Testing (CRESST), 2025
The urgent need to train military and civilian responders in combat casualty care during large-scale operations presents challenges due to the variability of learner preparedness and the resource demands of traditional curriculum development. This study examines the application of generative artificial intelligence (AI) in authoring and evaluating…
Descriptors: Artificial Intelligence, Computer Uses in Education, Medical Education, Multiple Choice Tests