Publication Date
| Date Range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 2 |
| Since 2022 (last 5 years) | 10 |
| Since 2017 (last 10 years) | 28 |
| Since 2007 (last 20 years) | 49 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Multiple Choice Tests | 112 |
| Reading Tests | 112 |
| Test Items | 112 |
| Reading Comprehension | 74 |
| Foreign Countries | 53 |
| Achievement Tests | 43 |
| Student Evaluation | 40 |
| Reading Skills | 38 |
| Educational Assessment | 36 |
| High Schools | 34 |
| English Instruction | 32 |
Author
| Author | Count |
| --- | --- |
| Tindal, Gerald | 6 |
| Alonzo, Julie | 5 |
| Liu, Kimy | 3 |
| Ardoin, Scott P. | 2 |
| Binder, Katherine S. | 2 |
| Cohen, Andrew D. | 2 |
| Lee, Yoonsun | 2 |
| Park, Bitnara Jasmine | 2 |
| Steedle, Jeffrey | 2 |
| Taylor, Catherine S. | 2 |
| Upton, Thomas A. | 2 |
Education Level
| Education Level | Count |
| --- | --- |
| Higher Education | 15 |
| Secondary Education | 13 |
| Postsecondary Education | 12 |
| Elementary Education | 11 |
| Middle Schools | 10 |
| Junior High Schools | 8 |
| Elementary Secondary Education | 6 |
| Grade 4 | 6 |
| Grade 7 | 6 |
| Grade 5 | 5 |
| Grade 3 | 4 |
Audience
| Audience | Count |
| --- | --- |
| Students | 24 |
| Practitioners | 4 |
| Teachers | 3 |
| Researchers | 1 |
Laws, Policies, & Programs
| Law/Program | Count |
| --- | --- |
| No Child Left Behind Act of 2001 | 1 |
Bilal Ghanem; Alona Fyshe – International Educational Data Mining Society, 2024
Multiple choice questions (MCQs) are a common way to assess reading comprehension. Every MCQ needs a set of distractor answers that are incorrect, but plausible enough to test student knowledge. However, good distractors are hard to create. Distractor generation (DG) models have been proposed, and their performance is typically evaluated using…
Descriptors: Multiple Choice Tests, Reading Comprehension, Test Items, Testing
Militsa G. Ivanova; Hanna Eklöf; Michalis P. Michaelides – Journal of Applied Testing Technology, 2025
Digital administration of assessments allows for the collection of process data indices, such as response time, which can serve as indicators of rapid-guessing and examinee test-taking effort. Setting a time threshold is essential to distinguish effortful from effortless behavior using item response times. Threshold identification methods may…
Descriptors: Test Items, Computer Assisted Testing, Reaction Time, Achievement Tests
Arandha May Rachmawati; Agus Widyantoro – English Language Teaching Educational Journal, 2025
This study aims to evaluate the quality of English reading comprehension test instruments used in informal learning, especially as English literacy tests. With a quantitative approach, the analysis was carried out using the Rasch model through the Quest program on 30 multiple-choice questions given to 30 grade IX students from informal educational…
Descriptors: Item Response Theory, Reading Tests, Reading Comprehension, English (Second Language)
Sharareh Sadat Sarsarabi; Zeinab Sazegar – International Journal of Language Testing, 2023
The stem of a multiple-choice question can be written using two types of sentences: interruptive (periodic) and cumulative (loose). This study deals with different kinds of stems in designing multiple-choice (MC) items. To fill the existing gap in the literature, two groups of student teachers passing general English courses…
Descriptors: Language Tests, Test Format, Multiple Choice Tests, Student Placement
Liao, Ray J. T. – Language Testing, 2023
Among the variety of selected-response formats used in L2 reading assessment, multiple-choice (MC) is the most commonly adopted, primarily due to its efficiency and objectivity. Given the impact of assessment results on teaching and learning, it is necessary to investigate the degree to which the MC format reliably measures learners' L2 reading…
Descriptors: Reading Tests, Language Tests, Second Language Learning, Second Language Instruction
Moradi, Elahe; Ghabanchi, Zargham; Pishghadam, Reza – Language Testing in Asia, 2022
Given the significance of test fairness, this study investigated a reading comprehension test for evidence of differential item functioning (DIF) based on English as a Foreign Language (EFL) learners' gender and their mode of learning (conventional vs. distance learning). To this end, 514 EFL learners were asked to take a 30-item…
Descriptors: Reading Comprehension, Test Bias, Test Items, Second Language Learning
Shear, Benjamin R. – Journal of Educational Measurement, 2023
Large-scale standardized tests are regularly used to measure student achievement overall and for student subgroups. These uses assume tests provide comparable measures of outcomes across student subgroups, but prior research suggests score comparisons across gender groups may be complicated by the type of test items used. This paper presents…
Descriptors: Gender Bias, Item Analysis, Test Items, Achievement Tests
Budi Waluyo; Ali Zahabi; Luksika Ruangsung – rEFLections, 2024
The increasing popularity of the Common European Framework of Reference (CEFR) in non-native English-speaking countries has generated a demand for concrete examples in the creation of CEFR-based tests that assess the four main English skills. In response, this research endeavors to provide insight into the development and validation of a…
Descriptors: Language Tests, Language Proficiency, Undergraduate Students, Language Skills
Corrin Moss; Scott P. Ardoin; Joshua A. Mellott; Katherine S. Binder – Grantee Submission, 2023
The current study investigated the impact of manipulating reading strategy, reading the questions first (QF) or the passage first (PF), during a reading comprehension test, and explored how reading strategy was related to student characteristics. Participants' eye movements were monitored as they read 12 passages and answered multiple-choice…
Descriptors: Reading Processes, Accuracy, Grade 8, Reading Tests
Becker, Anthony; Nekrasova-Beker, Tatiana – Educational Assessment, 2018
While previous research has identified numerous factors that contribute to item difficulty, studies involving large-scale reading tests have provided mixed results. This study examined five selected-response item types used to measure reading comprehension in the Pearson Test of English Academic: a) multiple-choice (choose one answer), b)…
Descriptors: Reading Comprehension, Test Items, Reading Tests, Test Format
Wang, Shichao; Li, Dongmei; Steedle, Jeffrey – ACT, Inc., 2021
Speeded tests set time limits so that few examinees can reach all items, and power tests allow most test-takers sufficient time to attempt all items. Educational achievement tests are sometimes described as "timed power tests" because the amount of time provided is intended to allow nearly all students to complete the test, yet this…
Descriptors: Timed Tests, Test Items, Achievement Tests, Testing
Reed, Deborah K.; Stevenson, Nathan; LeBeau, Brandon C. – Elementary School Journal, 2019
This study investigated the effects of imposing task- or process-oriented reading behaviors on reading comprehension assessment performance. Students in grades 5-8 (N = 275) were randomly assigned to hear multiple-choice items read aloud before or after reading a test passage and when they were and were not allowed access to the passage while…
Descriptors: Reading Comprehension, Reading Tests, Multiple Choice Tests, Reading Aloud to Others
Özdemir, Ezgi Çetinkaya; Akyol, Hayati – Universal Journal of Educational Research, 2019
Reading comprehension has an important place in lifelong learning. It is an interactive process between the reader and the text. Students need reading comprehension skills at all educational levels and for all school subjects. Determining the level of students' reading comprehension skills is the subject of testing and evaluation. Tests used to…
Descriptors: Reading Comprehension, Reading Tests, Test Construction, Grade 4
Ben Seipel; Sarah E. Carlson; Virginia Clinton-Lisell; Mark L. Davison; Patrick C. Kennedy – Grantee Submission, 2022
Originally designed for students in Grades 3 through 5, MOCCA (formerly the Multiple-choice Online Causal Comprehension Assessment) identifies students who struggle with comprehension and helps uncover why they struggle. There are many reasons why students might not comprehend what they read. They may struggle with decoding, or reading words…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Diagnostic Tests, Reading Tests
Hildenbrand, Lena; Wiley, Jennifer – Grantee Submission, 2021
Many studies have demonstrated that testing students on to-be-learned materials can be an effective learning activity. However, past studies have also shown that some practice test formats are more effective than others. Open-ended recall or short answer practice tests may be effective because the questions prompt deeper processing as students…
Descriptors: Test Format, Outcomes of Education, Cognitive Processes, Learning Activities