Showing all 9 results
Peer reviewed
PDF on ERIC: Download full text
Laura Kuusemets; Kristin Parve; Kati Ain; Tiina Kraav – International Journal of Education in Mathematics, Science and Technology, 2024
Using multiple-choice questions as learning and assessment tools is standard at all levels of education. However, a commonly cited disadvantage is the time and complexity involved in producing plausible distractor options, which can offset the time saved on feedback. The article…
Descriptors: Program Evaluation, Artificial Intelligence, Computer Assisted Testing, Man Machine Systems
Peer reviewed
Direct link
Kunal Sareen – Innovations in Education and Teaching International, 2024
This study examines the proficiency of ChatGPT, an AI language model, in answering questions on the Situational Judgement Test (SJT), a widely used assessment tool for evaluating the fundamental competencies of medical graduates in the UK. A total of 252 SJT questions from "Oxford Assess and Progress: Situational Judgement Test"…
Descriptors: Ethics, Decision Making, Artificial Intelligence, Computer Software
Peer reviewed
PDF on ERIC: Download full text
Fadillah, Sarah Meilani; Ha, Minsu; Nuraeni, Eni; Indriyanti, Nurma Yunita – Malaysian Journal of Learning and Instruction, 2023
Purpose: Researchers have found that when students are given the opportunity to change their answers, a majority change their responses from incorrect to correct, and this often improves overall test results. What prompts students to modify their answers? This study examines answer modification on a scientific reasoning test, with…
Descriptors: Science Tests, Multiple Choice Tests, Test Items, Decision Making
Peer reviewed
Direct link
Çinar, Ayse; Ince, Elif; Gezer, Murat; Yilmaz, Özgür – Education and Information Technologies, 2020
Worldwide, open-ended questions requiring short answers are used in many science assessments, such as the Programme for International Student Assessment (PISA) and the Trends in International Mathematics and Science Study (TIMSS). However, multiple-choice questions are used in many exams at the national level in Turkey, especially high school…
Descriptors: Foreign Countries, Computer Assisted Testing, Artificial Intelligence, Grading
Peer reviewed
PDF on ERIC: Download full text
Qiao Wang; Ralph L. Rose; Ayaka Sugawara; Naho Orita – Vocabulary Learning and Instruction, 2025
VocQGen is an automated tool designed to generate multiple-choice cloze (MCC) questions for vocabulary assessment in second language learning contexts. It leverages several natural language processing (NLP) tools and OpenAI's GPT-4 model to produce MCC items quickly from user-specified word lists. To evaluate its effectiveness, we used the first…
Descriptors: Vocabulary Skills, Artificial Intelligence, Computer Software, Multiple Choice Tests
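The VocQGen abstract above describes generating multiple-choice cloze (MCC) items from word lists. As a rough illustration of what one such item looks like structurally, here is a minimal sketch that blanks a target word in a carrier sentence and mixes it with distractors; the function name, the fixed four-option format, and the distractor pool are assumptions for illustration, not VocQGen's actual pipeline (which uses NLP tools and GPT-4).

```python
import random

def make_mcc_item(sentence: str, target: str, distractor_pool: list[str],
                  seed: int = 0) -> dict:
    """Build one multiple-choice cloze item: blank out the target word
    in the sentence and mix it with three distractors from the pool."""
    stem = sentence.replace(target, "_____", 1)
    rng = random.Random(seed)
    distractors = rng.sample([w for w in distractor_pool if w != target], 3)
    options = distractors + [target]
    rng.shuffle(options)
    return {"stem": stem, "options": options, "answer": target}

item = make_mcc_item(
    "The committee will convene next week.",
    "convene",
    ["adjourn", "disperse", "fabricate", "mitigate"],
)
print(item["stem"])    # The committee will _____ next week.
print(item["answer"])  # convene
```

In a tool like the one described, the carrier sentence and distractors would come from corpora or a language model rather than hand-written lists.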
Peer reviewed
Direct link
Aryadoust, Vahid – Computer Assisted Language Learning, 2020
The aim of the present study is two-fold. First, it uses eye-tracking to investigate the dynamics of item reading, both in multiple choice and matching items, before and during two hearings of listening passages in a computerized while-listening performance (WLP) test. Second, it investigates answer changing during the two hearings, which include…
Descriptors: Eye Movements, Test Items, Secondary School Students, Reading Processes
Peer reviewed
PDF on ERIC: Download full text
Kim, Kerry J.; Meir, Eli; Pope, Denise S.; Wendel, Daniel – Journal of Educational Data Mining, 2017
Computerized classification of student answers offers the possibility of instant feedback and improved learning. Open response (OR) questions provide greater insight into student thinking and understanding than more constrained multiple choice (MC) questions, but development of automated classifiers is more difficult, often requiring training a…
Descriptors: Classification, Computer Assisted Testing, Multiple Choice Tests, Test Format
Peer reviewed
PDF on ERIC: Download full text
Çetinavci, Ugur Recep – Journal of Language and Linguistic Studies, 2018
This study investigates how Turkish teacher trainees of English as a Foreign Language (EFL) interpret implied meanings, an integral but lesser-studied component of pragmatic competence. The study began with the development of an online multiple-choice discourse completion test (MDCT) through two piloting phases.…
Descriptors: Accuracy, English (Second Language), Second Language Learning, Second Language Instruction
Peer reviewed
PDF on ERIC: Download full text
Rybanov, Alexander Aleksandrovich – Turkish Online Journal of Distance Education, 2013
A set of criteria is offered for assessing the efficiency of the process of forming answers to multiple-choice test items. To increase the accuracy of computer-assisted testing results, it is suggested that the dynamics of forming the final answer be assessed using two factors: a loss-of-time factor and a correct-choice factor. The model…
Descriptors: Evaluation Criteria, Efficiency, Multiple Choice Tests, Test Items
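The Rybanov abstract above proposes combining a loss-of-time factor and a correct-choice factor into an efficiency assessment. A minimal sketch of that idea follows; the specific formulas, the equal weighting, and the switch-count discount are illustrative assumptions, not the model from the article.

```python
def efficiency_score(time_used: float, time_allowed: float,
                     final_correct: bool, switches: int) -> float:
    """Combine a loss-of-time factor and a correct-choice factor into a
    single efficiency score in [0, 1]. Formulas are illustrative only."""
    # Loss-of-time factor: fraction of the allowed time left unused.
    time_factor = max(0.0, 1.0 - time_used / time_allowed)
    # Correct-choice factor: credit for a correct final answer,
    # discounted for each switch between options along the way.
    choice_factor = (1.0 if final_correct else 0.0) / (1 + switches)
    # Equal weighting of the two factors (an assumption).
    return 0.5 * (time_factor + choice_factor)

print(efficiency_score(30, 60, True, 0))   # 0.75
```

Under this toy scoring, a fast, stable, correct answer scores higher than a slow or much-revised one, which is the kind of dynamics-sensitive assessment the abstract argues for.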