Monika Grotek; Agnieszka Slezak-Swiat – Reading in a Foreign Language, 2024
The study investigates the effect of the perception of text and task difficulty on adults' performance in reading tests in L1 and L2. The relationship between the following variables is studied: (a) readers' perception of text and task difficulty in L1 and L2 measured in a self-reported post-task questionnaire, (b) the number of correct answers to…
Descriptors: Difficulty Level, Second Language Learning, Eye Movements, Task Analysis
Becker, Anthony; Nekrasova-Beker, Tatiana – Educational Assessment, 2018
While previous research has identified numerous factors that contribute to item difficulty, studies involving large-scale reading tests have provided mixed results. This study examined five selected-response item types used to measure reading comprehension in the Pearson Test of English Academic: a) multiple-choice (choose one answer), b)…
Descriptors: Reading Comprehension, Test Items, Reading Tests, Test Format
Ping Wang – ProQuest LLC, 2021
According to the RAND model framework, reading comprehension test performance is influenced by readers' reading skills or reader characteristics, test properties, and their interactions. However, little empirical research has systematically compared the impacts of reader characteristics, test properties, and reader-test interactions across…
Descriptors: Reading Comprehension, Reading Tests, Reading Research, Test Items
Chaparro, Erin A.; Stoolmiller, Mike; Park, Yonghan; Baker, Scott K.; Basaraba, Deni; Fien, Hank; Smith, Jean L. Mercier – Assessment for Effective Intervention, 2018
Progress monitoring has been adopted as an integral part of multi-tiered support systems. Oral reading fluency (ORF) is the most established assessment for progress-monitoring purposes. To generate valid trend lines or slopes, ORF passages must be of equivalent difficulty. Recently, however, evidence indicates that ORF passages are not equivalent,…
Descriptors: Reading Fluency, Reading Tests, Grade 2, Difficulty Level
Steedle, Jeffrey T.; Morrison, Kristin M. – Educational Assessment, 2019
Assessment items are commonly field tested prior to operational use to observe statistical item properties such as difficulty. Item parameter estimates from field testing may be used to assign scores via pre-equating or computer adaptive designs. This study examined differences between item difficulty estimates based on field test and operational…
Descriptors: Field Tests, Test Items, Statistics, Difficulty Level
Kim, Ahyoung Alicia; Tywoniw, Rurik L.; Chapman, Mark – Language Assessment Quarterly, 2022
Technology-enhanced items (TEIs) are innovative, computer-delivered test items that allow test takers to better interact with the test environment compared to traditional multiple-choice items (MCIs). The interactive nature of TEIs offers improved construct coverage compared with MCIs, but little research exists regarding students' performance on…
Descriptors: Language Tests, Test Items, Computer Assisted Testing, English (Second Language)
Liao, Linyu – English Language Teaching, 2020
As a high-stakes standardized test, IELTS is expected to have comparable forms of test papers so that test takers from different test administrations on different dates receive comparable test scores. Therefore, this study examined the text difficulty and task characteristics of four parallel academic IELTS reading tests to reveal to what extent…
Descriptors: Second Language Learning, English (Second Language), Language Tests, High Stakes Tests
Elfiondri; Kasim, Usman; Mustafa, Faisal; Putra, Tomi Mandala – TESOL International Journal, 2020
Studies have shown that reading comprehension is the most difficult section of the Paper-Based Test (PBT) TOEFL. Therefore, this research aimed to identify which sub-skill in reading comprehension poses the greatest challenges for the students and how this sub-skill correlates to other reading comprehension sub-skills. To achieve this purpose,…
Descriptors: Reading Comprehension, Second Language Learning, Language Tests, English (Second Language)
Baghaei, Purya; Ravand, Hamdollah – SAGE Open, 2019
In many reading comprehension tests, different test formats are employed. Two commonly used test formats to measure reading comprehension are sustained passages followed by some questions and cloze items. Individual differences in handling test format peculiarities could constitute a source of score variance. In this study, a bifactor Rasch model…
Descriptors: Cloze Procedure, Test Bias, Individual Differences, Difficulty Level
Kevelson, Marisol J. C. – ETS Research Report Series, 2019
This study presents estimates of Black-White, Hispanic-White, and income achievement gaps using data from two different types of reading and mathematics assessments: constructed-response assessments that were likely more cognitively demanding and state achievement tests that were likely less cognitively demanding (i.e., composed solely or largely…
Descriptors: Racial Differences, Achievement Gap, White Students, African American Students
Chen, Jing; Sheehan, Kathleen M. – ETS Research Report Series, 2015
The "TOEFL"® family of assessments includes the "TOEFL Primary"™, "TOEFL Junior"®, and "TOEFL iBT"® tests. The linguistic complexity of stimulus passages in the reading sections of the TOEFL family of assessments is expected to differ across the test levels. This study evaluates the linguistic…
Descriptors: Language Tests, Second Language Learning, English (Second Language), Reading Comprehension
Cawthon, Stephanie – American Annals of the Deaf, 2011
Linguistic complexity of test items is one test format element that has been studied in the context of struggling readers and their participation in paper-and-pencil tests. The present article presents findings from an exploratory study on the potential relationship between linguistic complexity and test performance for deaf readers. A total of 64…
Descriptors: Language Styles, Test Content, Syntax, Linguistics
Powers, Sonya; Turhan, Ahmet; Binici, Salih – Pearson, 2012
The population sensitivity of vertical scaling results was evaluated for a state reading assessment spanning grades 3-10 and a state mathematics test spanning grades 3-8. Subpopulations considered included males and females. The 3-parameter logistic model was used to calibrate math and reading items and a common item design was used to construct…
Descriptors: Scaling, Equated Scores, Standardized Tests, Reading Tests
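Several of the entries above (e.g., Powers, Turhan, and Binici) calibrate items with the 3-parameter logistic (3PL) model. As background, a minimal sketch of the 3PL item response function in Python; the parameter names are illustrative and not taken from any of the studies listed:

```python
import math

def p_correct(theta: float, a: float, b: float, c: float) -> float:
    """3PL IRT model: probability that an examinee with ability `theta`
    answers correctly an item with discrimination `a`, difficulty `b`,
    and pseudo-guessing lower asymptote `c`."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# When ability equals item difficulty, the probability sits midway
# between the guessing floor c and 1.0.
print(p_correct(0.0, 1.0, 0.0, 0.2))  # ≈ 0.6
```

The guessing parameter `c` is what distinguishes the 3PL from the 2PL and Rasch models mentioned elsewhere in these results: it raises the lower asymptote above zero for selected-response items that can be answered correctly by chance.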
Meyers, Jason L.; Murphy, Stephen; Goodman, Joshua; Turhan, Ahmet – Pearson, 2012
Operational testing programs employing item response theory (IRT) applications benefit from the property of item parameter invariance, whereby item parameter estimates obtained from one sample can be applied to other samples (when the underlying assumptions are satisfied). In theory, this feature allows for applications such as computer-adaptive…
Descriptors: Equated Scores, Test Items, Test Format, Item Response Theory
Liao, Chi-Wen; Livingston, Samuel A. – ETS Research Report Series, 2008
Randomly equivalent forms (REF) of tests in listening and reading for nonnative speakers of English were created by stratified random assignment of items to forms, stratifying on item content and predicted difficulty. The study included 50 replications of the procedure for each test. Each replication generated 2 REFs. The equivalence of those 2…
Descriptors: Equated Scores, Item Analysis, Test Items, Difficulty Level
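The stratified random assignment procedure Liao and Livingston describe can be sketched as follows; the field names and strata are illustrative assumptions, not taken from the report:

```python
import random
from collections import defaultdict

def random_equivalent_forms(items, seed=None):
    """Split items into two randomly equivalent forms by stratifying on
    (content, difficulty band) and randomly assigning within each stratum."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for item in items:
        strata[(item["content"], item["difficulty_band"])].append(item)
    form_a, form_b = [], []
    for stratum in strata.values():
        rng.shuffle(stratum)
        # Alternate assignment so each form receives half of every stratum
        # (odd-sized strata give form_a the extra item).
        form_a.extend(stratum[0::2])
        form_b.extend(stratum[1::2])
    return form_a, form_b

# Hypothetical item pool: two content areas crossed with two difficulty bands.
items = [{"id": i,
          "content": "reading" if i % 2 else "listening",
          "difficulty_band": "easy" if i < 8 else "hard"}
         for i in range(16)]
form_a, form_b = random_equivalent_forms(items, seed=1)
```

Because assignment is random within each stratum, repeating the procedure (as the study does, 50 times per test) yields many pairs of forms whose content and predicted-difficulty profiles match by construction.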