Laura Kuusemets; Kristin Parve; Kati Ain; Tiina Kraav – International Journal of Education in Mathematics, Science and Technology, 2024
Using multiple-choice questions as learning and assessment tools is standard at all levels of education. However, when discussing the positive and negative aspects of their use, the time and complexity involved in producing plausible distractor options emerge as a disadvantage that offsets the time savings in relation to feedback. The article…
Descriptors: Program Evaluation, Artificial Intelligence, Computer Assisted Testing, Man Machine Systems
Kunal Sareen – Innovations in Education and Teaching International, 2024
This study examines the proficiency of ChatGPT, an AI language model, in answering questions on the Situational Judgement Test (SJT), a widely used assessment tool for evaluating the fundamental competencies of medical graduates in the UK. A total of 252 SJT questions from the "Oxford Assess and Progress: Situational Judgement Test"…
Descriptors: Ethics, Decision Making, Artificial Intelligence, Computer Software
Fadillah, Sarah Meilani; Ha, Minsu; Nuraeni, Eni; Indriyanti, Nurma Yunita – Malaysian Journal of Learning and Instruction, 2023
Purpose: Researchers discovered that when students were given the opportunity to change their answers, a majority changed their responses from incorrect to correct, and this change often increased the overall test results. What prompts students to modify their answers? This study aims to examine answer modification on a scientific reasoning test, with…
Descriptors: Science Tests, Multiple Choice Tests, Test Items, Decision Making
Çinar, Ayse; Ince, Elif; Gezer, Murat; Yilmaz, Özgür – Education and Information Technologies, 2020
Worldwide, open-ended questions that require short answers have been used in many science exams, such as the Programme for International Student Assessment (PISA) and the Trends in International Mathematics and Science Study (TIMSS). However, multiple-choice questions are used for many exams at the national level in Turkey, especially high school…
Descriptors: Foreign Countries, Computer Assisted Testing, Artificial Intelligence, Grading
Qiao Wang; Ralph L. Rose; Ayaka Sugawara; Naho Orita – Vocabulary Learning and Instruction, 2025
VocQGen is an automated tool designed to generate multiple-choice cloze (MCC) questions for vocabulary assessment in second language learning contexts. It leverages several natural language processing (NLP) tools and OpenAI's GPT-4 model to produce MCC items quickly from user-specified word lists. To evaluate its effectiveness, we used the first…
Descriptors: Vocabulary Skills, Artificial Intelligence, Computer Software, Multiple Choice Tests
Aryadoust, Vahid – Computer Assisted Language Learning, 2020
The aim of the present study is two-fold. First, it uses eye-tracking to investigate the dynamics of item reading, in both multiple-choice and matching items, before and during two hearings of listening passages in a computerized while-listening performance (WLP) test. Second, it investigates answer changing during the two hearings, which include…
Descriptors: Eye Movements, Test Items, Secondary School Students, Reading Processes
Kim, Kerry J.; Meir, Eli; Pope, Denise S.; Wendel, Daniel – Journal of Educational Data Mining, 2017
Computerized classification of student answers offers the possibility of instant feedback and improved learning. Open response (OR) questions provide greater insight into student thinking and understanding than more constrained multiple choice (MC) questions, but development of automated classifiers is more difficult, often requiring training a…
Descriptors: Classification, Computer Assisted Testing, Multiple Choice Tests, Test Format
Çetinavci, Ugur Recep – Journal of Language and Linguistic Studies, 2018
This study aims to investigate how Turkish Teacher Trainees of English as a Foreign Language (EFL) interpret implied meanings, which is an integral but lesser-studied component of pragmatic competence. The conduct of the study began with the development of an online multiple-choice discourse completion test (MDCT) through two piloting phases.…
Descriptors: Accuracy, English (Second Language), Second Language Learning, Second Language Instruction
Rybanov, Alexander Aleksandrovich – Turkish Online Journal of Distance Education, 2013
A set of criteria is offered for assessing the efficiency of the process of forming answers to multiple-choice test items. To increase the accuracy of computer-assisted testing results, it is suggested to assess the dynamics of the process of forming the final answer using two factors: a loss-of-time factor and a correct-choice factor. The model…
Descriptors: Evaluation Criteria, Efficiency, Multiple Choice Tests, Test Items