Showing 1 to 15 of 81 results
Peer reviewed
PDF on ERIC Download full text
Leonidas Zotos; Hedderik van Rijn; Malvina Nissim – International Educational Data Mining Society, 2025
In an educational setting, an estimate of the difficulty of Multiple-Choice Questions (MCQs), a commonly used strategy to assess learning progress, constitutes very useful information for both teachers and students. Since human assessment is costly from multiple points of view, automatic approaches to MCQ item difficulty estimation are…
Descriptors: Multiple Choice Tests, Test Items, Difficulty Level, Artificial Intelligence
Peer reviewed
PDF on ERIC Download full text
Rita Arfi Astuti Ningroom; Sri Yamtinah; Riyadi – Journal of Education and Learning (EduLearn), 2025
There are many interesting scientific concepts to learn in the natural and social sciences. The initial concepts that a student possesses may contradict the actual concepts, which causes misconceptions. Misconceptions are identified using misconception detection test tools. In fact, the development of the use of diagnostic test…
Descriptors: Foreign Countries, Test Construction, Diagnostic Tests, Multiple Choice Tests
Peer reviewed
Direct link
Kentaro Fukushima; Nao Uchida; Kensuke Okada – Journal of Educational and Behavioral Statistics, 2025
Diagnostic tests are typically administered in a multiple-choice (MC) format due to their advantages of objectivity and time efficiency. The MC-deterministic input, noisy "and" gate (DINA) family of models, a representative class of cognitive diagnostic models for MC items, efficiently and parsimoniously estimates the mastery profiles of…
Descriptors: Diagnostic Tests, Cognitive Measurement, Multiple Choice Tests, Educational Assessment
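For reference (the snippet above does not spell out the model), the MC-DINA family extends the standard DINA item response function, in which a respondent with attribute profile $\alpha_i$ answers item $j$ correctly with probability
$P(X_{ij}=1 \mid \alpha_i) = (1-s_j)^{\eta_{ij}} \, g_j^{\,1-\eta_{ij}}$, where $\eta_{ij} = \prod_k \alpha_{ik}^{q_{jk}}$ indicates mastery of all attributes required by item $j$, $s_j$ is the slip parameter, and $g_j$ the guessing parameter.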
Kala Krishna; Pelin Akyol; Esma Ozer – National Bureau of Economic Research, 2025
Exams are designed to rank students objectively by their abilities, including elements such as time limits, the number and difficulty of questions, and negative marking policies. Using data from a lab-in-field experiment, we develop and estimate a model of student behavior in multiple-choice exams that incorporates the effects of time constraints…
Descriptors: Multiple Choice Tests, Student Behavior, Response Style (Tests), Time
Peer reviewed
PDF on ERIC Download full text
Aiman Mohammad Freihat; Omar Saleh Bani Yassin – Educational Process: International Journal, 2025
Background/purpose: This study aimed to reveal the accuracy of estimating multiple-choice test item parameters under item response theory models of measurement. Materials/methods: The researchers relied on measurement accuracy indicators, which express the absolute difference between the estimated and actual values of the…
Descriptors: Accuracy, Computation, Multiple Choice Tests, Test Items
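As general background (the specific IRT models and indicators are not detailed in this snippet), a three-parameter logistic model gives the probability of a correct response as
$P(X_j=1 \mid \theta) = c_j + (1-c_j)\,\dfrac{1}{1+e^{-a_j(\theta-b_j)}}$,
with discrimination $a_j$, difficulty $b_j$, and pseudo-guessing $c_j$; an accuracy indicator of the kind described would then be the absolute difference between an estimated and a true parameter value, e.g. $|\hat{b}_j - b_j|$.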
Peer reviewed
Direct link
David Hope; David Kluth; Matthew Homer; Avril Dewar; Rikki Goddard-Fuller; Alan Jaap; Helen Cameron – Advances in Health Sciences Education, 2025
Rasch modelling is a powerful tool for evaluating item performance, measuring drift in difficulty over time, and comparing students who sat assessments at different times or at different sites. Here, we use data from thirty UK medical schools to describe the benefits of Rasch modelling in quality assurance and the barriers to using it. Sixty…
Descriptors: Item Response Theory, Medical Schools, Foreign Countries, Quality Assurance
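For reference, the Rasch model underlying such analyses expresses the probability that examinee $i$ answers item $j$ correctly as
$P(X_{ij}=1 \mid \theta_i, b_j) = \dfrac{e^{\theta_i - b_j}}{1 + e^{\theta_i - b_j}}$,
placing item difficulties $b_j$ on a common logit scale so that drift over time or across sites can be examined.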
Peer reviewed
Direct link
Kashinath Boral; Krishna Kanta Mondal – Journal of Educational Technology Systems, 2025
This study evaluates the performance of three leading AI chatbots--OpenAI's ChatGPT, Google's Gemini, and Microsoft Bing Copilot--in answering multiple choice questions (MCQs) from the UGC-NET Education paper. Using 150 randomly selected questions from examination cycles between June 2019 and December 2023, the chatbots' accuracy was assessed…
Descriptors: Artificial Intelligence, Technology Uses in Education, Multiple Choice Tests, Program Effectiveness
Peer reviewed
Direct link
Brent A. Stevenor; Nadine LeBarron McBride; Charles Anyanwu – Journal of Applied Testing Technology, 2025
Enemy items are pairs of test items that should not be presented to a candidate on the same test. Identifying enemies is essential for personnel assessment, as they weaken the measurement precision and validity of a test. In this research, we examined the effectiveness of lexical and semantic natural language processing techniques for identifying enemy…
Descriptors: Test Items, Natural Language Processing, Occupational Tests, Test Construction
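The snippet does not specify the techniques used, but a common semantic approach of this kind scores the similarity of two items' texts by the cosine of their embedding vectors,
$\mathrm{sim}(u, v) = \dfrac{u \cdot v}{\lVert u \rVert \, \lVert v \rVert}$,
flagging pairs above a chosen threshold as potential enemies.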
Peer reviewed
Direct link
Jessica D. Kotik; Merilee McCurdy; Christopher H. Skinner; Chelsea S. Wilson; Valerie E. Hogan-Sandi; Catherine Smith; Brian E. Wilhoit – Psychology in the Schools, 2025
Inclusive Postsecondary Education programs are designed to provide students with disabilities the opportunity to seek postsecondary education while receiving explicit instruction and support in social and emotional, independent living, and vocational skills. While students enroll in college classes, there is little research available for how to…
Descriptors: Students with Disabilities, Intellectual Disability, Developmental Disabilities, Intervention
Victoria Crisp; Sylvia Vitello; Abdullah Ali Khan; Heather Mahy; Sarah Hughes – Research Matters, 2025
This research set out to enhance our understanding of the exam techniques and types of written annotations or markings that learners may wish to use to support their thinking when taking digital multiple-choice exams. Additionally, we aimed to further explore issues around the factors that contribute to learners writing less rough work and…
Descriptors: Computer Assisted Testing, Test Format, Multiple Choice Tests, Notetaking
Peer reviewed
Direct link
Christian Myles; Laura Gorman; James F. X. Jones – Anatomical Sciences Education, 2025
The textbook depiction of the hepatobiliary tree is present in only 55%-62% of the population. Misidentification of hepatobiliary variants can lead to bile duct injuries in cholecystectomies. A better understanding of variants has been cited as a key area for improvement in anatomy education. The aim of this study was to compare the effectiveness…
Descriptors: Computer Peripherals, Printing, Science Instruction, Teaching Methods
Peer reviewed
Direct link
Anupkumar D. Dhanvijay; Amita Kumari; Mohammed Jaffer Pinjar; Anita Kumari; Abhimanyu Ganguly; Ankita Priya; Ayesha Juhi; Pratima Gupta; Himel Mondal – Advances in Physiology Education, 2025
Multiple-choice questions (MCQs) are widely used for assessment in medical education. While human-generated MCQs benefit from pedagogical insight, creating high-quality items is time intensive. With the advent of artificial intelligence (AI), tools like DeepSeek R1 offer potential for automated MCQ generation, though their educational validity…
Descriptors: Multiple Choice Tests, Physiology, Artificial Intelligence, Test Items
Peer reviewed
Direct link
Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2025
Automated multiple-choice question (MCQ) generation is valuable for scalable assessment and enhanced learning experiences. However, existing MCQ generation methods face challenges in ensuring plausible distractors and maintaining answer consistency. This paper introduces a method for MCQ generation that integrates reasoning-based explanations…
Descriptors: Automation, Computer Assisted Testing, Multiple Choice Tests, Natural Language Processing
Peer reviewed
PDF on ERIC Download full text
Laura S. Kabiri; Thomas M. McCabe; Catherine R. Barber; Augusto X. Rodriguez – HAPS Educator, 2025
Student-centered teaching techniques and equitable assessments are crucial in high-quality education. Asking students to write multiple-choice questions (MCQs) over course content and including them on exams may address both concurrently. Much of the work surveying students' responses to this practice involved graduate level education in…
Descriptors: Learner Engagement, Student Satisfaction, Student Developed Materials, Multiple Choice Tests
Alan D. Koenig; John J. Lee; Eric Savitsky; Gabriele Nataneli; Karson Lindstrom; David L. Schriger; Tyler Savitsky – National Center for Research on Evaluation, Standards, and Student Testing (CRESST), 2025
The urgent need to train military and civilian responders in combat casualty care during large-scale operations presents challenges due to the variability of learner preparedness and the resource demands of traditional curriculum development. This study examines the application of generative artificial intelligence (AI) in authoring and evaluating…
Descriptors: Artificial Intelligence, Computer Uses in Education, Medical Education, Multiple Choice Tests