Showing 1 to 15 of 16 results
Peer reviewed
Direct link
Brent A. Stevenor; Nadine LeBarron McBride; Charles Anyanwu – Journal of Applied Testing Technology, 2025
Enemy items are pairs of test items that should not be presented to a candidate on the same test. Identifying enemy items is essential in personnel assessment, as they weaken the measurement precision and validity of a test. In this research, we examined the effectiveness of lexical and semantic natural language processing techniques for identifying enemy…
Descriptors: Test Items, Natural Language Processing, Occupational Tests, Test Construction
Peer reviewed
Direct link
Archana Praveen Kumar; Ashalatha Nayak; Manjula Shenoy K.; Chaitanya; Kaustav Ghosh – International Journal of Artificial Intelligence in Education, 2024
Multiple Choice Questions (MCQs) are a popular assessment method because they enable automated evaluation, flexible administration, and use with large groups. Despite these benefits, the manual construction of MCQs is challenging, time-consuming, and error-prone. This is because each MCQ consists of a question called the "stem", a…
Descriptors: Multiple Choice Tests, Test Construction, Test Items, Semantics
Peer reviewed
Direct link
C. H., Dhawaleswar Rao; Saha, Sujan Kumar – IEEE Transactions on Learning Technologies, 2023
Multiple-choice questions (MCQs) play a significant role in educational assessment. Automatic MCQ generation has been an active research area for years, and many systems have been developed for it. Still, we could not find any system that generates, from school-level textbook content, accurate MCQs that are useful in real examinations.…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Automation, Test Items
Peer reviewed
Direct link
Patra, Rakesh; Saha, Sujan Kumar – Education and Information Technologies, 2019
Assessment plays an important role in learning, and Multiple Choice Questions (MCQs) are quite popular in large-scale evaluations. Technology-enabled learning necessitates smart assessment, and automatic MCQ generation has therefore become increasingly popular over the last two decades. Despite a large amount of research effort, system-generated MCQs are…
Descriptors: Multiple Choice Tests, High Stakes Tests, Semantics, Evaluation Methods
Peer reviewed
Direct link
Zhang, Lishan; VanLehn, Kurt – Interactive Learning Environments, 2021
Despite their drawbacks, multiple-choice questions are an enduring feature of instruction because they can be answered more rapidly than open-response questions and are easily scored. However, it can be difficult to generate good incorrect choices (called "distractors"). We designed an algorithm to generate distractors from a…
Descriptors: Semantics, Networks, Multiple Choice Tests, Teaching Methods
Peer reviewed
Download full text (PDF on ERIC)
Lee, Abby Deng-Huei – English Language Teaching, 2018
To evaluate the sensitivity of multiple-choice cloze (MCC) tests that use different item types (syntactic, semantic, and connective) to assess reading ability, 170 English as a foreign language (EFL) students at a vocational college in Taiwan were recruited. The students were divided into two groups (level A and level B) based on their scores…
Descriptors: Foreign Countries, Multiple Choice Tests, Cloze Procedure, Test Items
Peer reviewed
Download full text (PDF on ERIC)
Papasalouros, Andreas; Chatzigiannakou, Maria – International Association for Development of the Information Society, 2018
Automating the production of questions for assessment and self-assessment has recently become an active field of study. The use of Semantic Web technologies has certain advantages over other question generation methods and is thus one of the most important lines of research on this problem. The aim of this paper is to provide an overview of…
Descriptors: Computer Assisted Testing, Web 2.0 Technologies, Test Format, Multiple Choice Tests
Susanti, Yuni; Tokunaga, Takenobu; Nishikawa, Hitoshi; Obari, Hiroyuki – Research and Practice in Technology Enhanced Learning, 2017
The present study investigates the best factor for controlling the item difficulty of multiple-choice English vocabulary questions generated by an automatic question generation system. Three factors are considered for controlling item difficulty: (1) reading passage difficulty, (2) semantic similarity between the correct answer and distractors,…
Descriptors: Test Items, Difficulty Level, Computer Assisted Testing, Vocabulary Development
Peer reviewed
Direct link
Sun, Bo; Zhu, Yunzong; Xiao, Yongkang; Xiao, Rong; Wei, Yungang – IEEE Transactions on Learning Technologies, 2019
In recent years, computerized adaptive testing (CAT) has gained popularity as an important means of evaluating students' ability. Assigning tags to test questions is crucial in CAT. Manual tagging is widely used for constructing question banks; however, this approach is time-consuming and can lead to consistency issues. Automatic question tagging,…
Descriptors: Computer Assisted Testing, Student Evaluation, Test Items, Multiple Choice Tests
Peer reviewed
Direct link
Löwenadler, John – Language Testing, 2019
This study aims to investigate patterns of variation in the interplay of L2 language ability and general reading comprehension skills in L2 reading by comparing item-level effects of test-takers' results on L1 and L2 reading comprehension tests. The material comes from more than 500,000 people tested on L1 (Swedish) and L2 (English) in the…
Descriptors: Swedish, English (Second Language), Second Language Learning, Second Language Instruction
Peer reviewed
Direct link
Liu, Ming; Rus, Vasile; Liu, Li – IEEE Transactions on Learning Technologies, 2018
Automatic question generation can save teachers the time needed to construct examination papers. Several approaches have been proposed to automatically generate multiple-choice questions for vocabulary assessment or grammar exercises. However, most of these studies focused on generating questions in English with a certain similarity…
Descriptors: Multiple Choice Tests, Regression (Statistics), Test Items, Natural Language Processing
Peer reviewed
Download full text (PDF on ERIC)
Sharma Mittal, Ruhi; Nagar, Seema; Sharma, Mourvi; Dwivedi, Utkarsh; Dey, Prasenjit; Kokku, Ravi – International Educational Data Mining Society, 2018
As education becomes increasingly digitized and intelligent tutoring systems gain commercial prominence, scalable assessment generation mechanisms become a critical requirement for improving learning outcomes. Assessments provide a way to measure learners' levels of understanding and difficulty and to personalize their learning. There have…
Descriptors: Vocabulary Development, Language Tests, Semantics, Associative Learning
Peer reviewed
Direct link
Aldabe, Itziar; Maritxalar, Montse – IEEE Transactions on Learning Technologies, 2014
The work we present in this paper aims to help teachers create multiple-choice science tests. We focus on a scientific vocabulary-learning scenario taking place in a Basque-language educational environment. In this particular scenario, we explore the option of automatically generating Multiple-Choice Questions (MCQ) by means of Natural Language…
Descriptors: Science Tests, Test Construction, Computer Assisted Testing, Multiple Choice Tests
Peer reviewed
Download full text (PDF on ERIC)
Deane, Paul; Lawless, René R.; Li, Chen; Sabatini, John; Bejar, Isaac I.; O'Reilly, Tenaha – ETS Research Report Series, 2014
We expect that word knowledge accumulates gradually. This article draws on earlier approaches to assessing depth, but focuses on one dimension: richness of semantic knowledge. We present results from a study in which three distinct item types were developed at three levels of depth: knowledge of common usage patterns, knowledge of broad topical…
Descriptors: Vocabulary, Test Items, Language Tests, Semantics
Peer reviewed
Marshall, Thomas E.; And Others – Journal of Educational Technology Systems, 1996
Examines the strategies used in answering a computerized multiple-choice test in which all questions on a semantic topic were either grouped together or randomly distributed. Findings indicate that students grouped by test performance used different test-completion strategies, reflecting distinct cognitive processes between the groups. (AEF)
Descriptors: Academic Achievement, Cognitive Processes, Computer Assisted Testing, Higher Education