Showing 1 to 15 of 977 results
Peer reviewed
Direct link
Séverin Lions; María Paz Blanco; Pablo Dartnell; Carlos Monsalve; Gabriel Ortega; Julie Lemarié – Applied Measurement in Education, 2024
Multiple-choice items are universally used in formal education. Since they should assess learning, not test-wiseness or guesswork, they must be constructed following the highest possible standards. Hundreds of item-writing guides have provided guidelines to help test developers adopt appropriate strategies to define the distribution and sequence…
Descriptors: Test Construction, Multiple Choice Tests, Guidelines, Test Items
Peer reviewed
PDF available on ERIC
Rita Arfi Astuti Ningroom; Sri Yamtinah; Riyadi – Journal of Education and Learning (EduLearn), 2025
Natural and social science offer many interesting concepts to learn. The initial conceptions a student holds may contradict the accepted scientific concepts, which is what gives rise to misconceptions. Misconceptions are identified using misconception detection test tools. In fact, the development of the use of diagnostic test…
Descriptors: Foreign Countries, Test Construction, Diagnostic Tests, Multiple Choice Tests
Peer reviewed
Direct link
Kashinath Boral; Krishna Kanta Mondal – Journal of Educational Technology Systems, 2025
This study evaluates the performance of three leading AI chatbots--OpenAI's ChatGPT, Google's Gemini, and Microsoft Bing Copilot--in answering multiple choice questions (MCQs) from the UGC-NET Education paper. Using 150 randomly selected questions from examination cycles between June 2019 and December 2023, the chatbots' accuracy was assessed…
Descriptors: Artificial Intelligence, Technology Uses in Education, Multiple Choice Tests, Program Effectiveness
Peer reviewed
Direct link
Brent A. Stevenor; Nadine LeBarron McBride; Charles Anyanwu – Journal of Applied Testing Technology, 2025
Enemy items are two test items that should not be presented to a candidate on the same test. Identifying enemies is essential for personnel assessment, as they weaken the measurement precision and validity of a test. In this research, we examined the effectiveness of lexical and semantic natural language processing techniques for identifying enemy…
Descriptors: Test Items, Natural Language Processing, Occupational Tests, Test Construction
Peer reviewed
Direct link
Anupkumar D. Dhanvijay; Amita Kumari; Mohammed Jaffer Pinjar; Anita Kumari; Abhimanyu Ganguly; Ankita Priya; Ayesha Juhi; Pratima Gupta; Himel Mondal – Advances in Physiology Education, 2025
Multiple-choice questions (MCQs) are widely used for assessment in medical education. While human-generated MCQs benefit from pedagogical insight, creating high-quality items is time intensive. With the advent of artificial intelligence (AI), tools like DeepSeek R1 offer potential for automated MCQ generation, though their educational validity…
Descriptors: Multiple Choice Tests, Physiology, Artificial Intelligence, Test Items
Peer reviewed
Direct link
Archana Praveen Kumar; Ashalatha Nayak; Manjula Shenoy K.; Chaitanya; Kaustav Ghosh – International Journal of Artificial Intelligence in Education, 2024
Multiple Choice Questions (MCQs) are a popular assessment method because they enable automated evaluation, flexible administration and use with huge groups. Despite these benefits, the manual construction of MCQs is challenging, time-consuming and error-prone. This is because each MCQ is comprised of a question called the "stem", a…
Descriptors: Multiple Choice Tests, Test Construction, Test Items, Semantics
Peer reviewed
Direct link
Helen Zhang; Anthony Perry; Irene Lee – International Journal of Artificial Intelligence in Education, 2025
The rapid expansion of Artificial Intelligence (AI) in our society makes it urgent and necessary to develop young students' AI literacy so that they can become informed citizens and critical consumers of AI technology. Over the past decade many efforts have focused on developing curricular materials that make AI concepts accessible and engaging to…
Descriptors: Test Construction, Test Validity, Measures (Individuals), Artificial Intelligence
Peer reviewed
Direct link
Rekha; Shakeela K. – Journal on School Educational Technology, 2025
The main objective of the present study was to construct and standardize an achievement test in science for the secondary school science students in grade 8. An achievement test having 120 test items was prepared by the facilitator based on the four main learning objectives of teaching science that are knowledge, understanding, application, and…
Descriptors: Test Construction, Standardized Tests, Secondary School Students, Science Achievement
Peer reviewed
Direct link
Semere Kiros Bitew; Amir Hadifar; Lucas Sterckx; Johannes Deleu; Chris Develder; Thomas Demeester – IEEE Transactions on Learning Technologies, 2024
Multiple-choice questions (MCQs) are widely used in digital learning systems, as they allow for automating the assessment process. However, owing to the increased digital literacy of students and the advent of social media platforms, MCQ tests are widely shared online, and teachers are continuously challenged to create new questions, which is an…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Test Construction, Test Items
Peer reviewed
Direct link
Karen Singer-Freeman – Assessment Update, 2024
A common feature of many assessment plans is the use of multiple-choice questions. Although there are criticisms of multiple-choice questions, this assessment format is here to stay--multiple-choice questions are effective means of evaluation in large classes, central to many licensing and entry exams, used in most adaptive learning platforms, and…
Descriptors: Test Construction, Multiple Choice Tests, Student Evaluation, Learning Processes
Peer reviewed
PDF available on ERIC
Sherlin Illene; Selly Feranie; Parsaoran Siahaan – Journal of Education and Learning (EduLearn), 2023
Currently, we are facing the rapid development of information and communication technology (ICT) that characterizes the 21st century. Challenges, problems, life, and careers in the 21st century can be successfully faced if we master the 21st century skills. However, nowadays it is still very rare to develop a 21st century skill measurement model,…
Descriptors: Multiple Choice Tests, 21st Century Skills, Heat, Test Construction
Andrew M. Olney – Grantee Submission, 2022
Multi-angle question answering models have recently been proposed that promise to perform related tasks like question generation. However, performance on related tasks has not been thoroughly studied. We investigate a leading model called Macaw on the task of multiple choice question generation and evaluate its performance on three angles that…
Descriptors: Test Construction, Multiple Choice Tests, Test Items, Models
Peer reviewed
Direct link
Dominik Šmida; Elena Cipková; Michael Fuchs – International Journal of Science Education, 2024
Teachers can develop science literacy among pupils through Inquiry-Based Science Education. IBSE helps to evolve the inquiry skills necessary for understanding the world of science and nature. To be able to improve the level of inquiry skills systematically, we need to know the current level among pupils. For this reason, we designed a…
Descriptors: Foreign Countries, High School Students, Middle School Students, Science Education
Peer reviewed
PDF available on ERIC
Maristela Petrovic-Dzerdz – Collected Essays on Learning and Teaching, 2024
Large introductory classes, with their expansive curriculum, demand assessment strategies that blend efficiency with reliability, prompting the consideration of multiple-choice (MC) tests as a viable option. Crafting a high-quality MC test, however, necessitates a meticulous process involving reflection on assessment format appropriateness, test…
Descriptors: Multiple Choice Tests, Test Construction, Test Items, Alignment (Education)
Peer reviewed
PDF available on ERIC
Kübra Karakaya Özyer – International Journal of Technology in Education and Science, 2025
This study aims to delve into the process and perceptions of pre-service teachers as they engage in generating multiple-choice questions with the assistance of generative AI tools. Adopting a single-case study design, the research involved the participation of 35 pre-service teachers. The participants were tasked with utilizing generative AI tools…
Descriptors: Preservice Teachers, Preservice Teacher Education, Artificial Intelligence, Multiple Choice Tests