Showing all 6 results
Peer reviewed
Full text available on ERIC (PDF)
Valentina Albano; Donatella Firmani; Luigi Laura; Jerin George Mathew; Anna Lucia Paoletti; Irene Torrente – Journal of Learning Analytics, 2023
Multiple-choice questions (MCQs) are widely used in educational assessments and professional certification exams. Managing large repositories of MCQs, however, poses several challenges due to the high volume of questions and the need to maintain their quality and relevance over time. One of these challenges is the presence of questions that…
Descriptors: Natural Language Processing, Multiple Choice Tests, Test Items, Item Analysis
Peer reviewed
Direct link
C. H., Dhawaleswar Rao; Saha, Sujan Kumar – IEEE Transactions on Learning Technologies, 2023
Multiple-choice questions (MCQs) play a significant role in educational assessment. Automatic MCQ generation has been an active research area for years, and many systems have been developed for it. Still, we could not find any system that generates accurate MCQs from school-level textbook content that are useful in real examinations…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Automation, Test Items
Peer reviewed
Full text available on ERIC (PDF)
Lee, Abby Deng-Huei – English Language Teaching, 2018
To evaluate the sensitivity of multiple-choice cloze (MCC) tests that use different types of items--syntactic, semantic, and connective--to assess reading ability, 170 English as a foreign language (EFL) students in a vocational college in Taiwan were recruited. The students were divided into two groups (level A and level B) based on their scores…
Descriptors: Foreign Countries, Multiple Choice Tests, Cloze Procedure, Test Items
Peer reviewed
Direct link
Löwenadler, John – Language Testing, 2019
This study aims to investigate patterns of variation in the interplay of L2 language ability and general reading comprehension skills in L2 reading, by comparing item-level effects of test-takers' results on L1 and L2 reading comprehension tests. The material comes from more than 500,000 people tested on L1 (Swedish) and L2 (English) in the…
Descriptors: Swedish, English (Second Language), Second Language Learning, Second Language Instruction
Peer reviewed
Direct link
Cawthon, Stephanie – American Annals of the Deaf, 2011
Linguistic complexity of test items is one test format element that has been studied in the context of struggling readers and their participation in paper-and-pencil tests. The present article presents findings from an exploratory study on the potential relationship between linguistic complexity and test performance for deaf readers. A total of 64…
Descriptors: Language Styles, Test Content, Syntax, Linguistics
Green, Kathy E. – 1983
The purpose of this study was to determine whether item difficulty is significantly affected by language difficulty and response set convergence. Language difficulty was varied by increasing sentence (stem) length, increasing syntactic complexity, and substituting uncommon words for more familiar terms in the item stem. Item wording ranged from…
Descriptors: Difficulty Level, Foreign Countries, Higher Education, Item Analysis