Showing 1 to 15 of 44 results
Peer reviewed
Jessie Leigh Nielsen; Rikke Vang Christensen; Mads Poulsen – Journal of Research in Reading, 2024
Background: Studies of syntactic comprehension and reading comprehension use a wide range of syntactic comprehension tests that vary considerably in format. The goal of this study was to examine to what extent different formats of syntactic comprehension tests measure the same construct. Methods: Sixty-nine Grade 4 students completed multiple…
Descriptors: Syntax, Reading Comprehension, Comparative Analysis, Reading Tests
Marini, Jessica P.; Westrick, Paul A.; Young, Linda; Shaw, Emily J. – College Board, 2022
This study examines relationships between digital SAT scores and other relevant educational measures, such as high school grade point average (HSGPA), PSAT/NMSQT Total score, and Average AP Exam score, and compares those relationships to the corresponding relationships for current paper-and-pencil SAT scores. This information can provide…
Descriptors: Scores, College Entrance Examinations, Comparative Analysis, Test Format
Peer reviewed
Ben-Yehudah, Gal; Eshet-Alkalai, Yoram – British Journal of Educational Technology, 2021
The use of digital environments for both learning and assessment is becoming prevalent. This often leads to incongruent situations, in which the study medium (e.g., a printed textbook) differs from the testing medium (e.g., online multiple-choice exams). Despite some evidence that incongruent study-test situations are associated with inferior…
Descriptors: Reading Comprehension, Reading Tests, Test Format, Computer Assisted Testing
Peer reviewed
Shear, Benjamin R. – Journal of Educational Measurement, 2023
Large-scale standardized tests are regularly used to measure student achievement overall and for student subgroups. These uses assume tests provide comparable measures of outcomes across student subgroups, but prior research suggests score comparisons across gender groups may be complicated by the type of test items used. This paper presents…
Descriptors: Gender Bias, Item Analysis, Test Items, Achievement Tests
Wang, Shichao; Li, Dongmei; Steedle, Jeffrey – ACT, Inc., 2021
Speeded tests set time limits so that few examinees can reach all items, and power tests allow most test-takers sufficient time to attempt all items. Educational achievement tests are sometimes described as "timed power tests" because the amount of time provided is intended to allow nearly all students to complete the test, yet this…
Descriptors: Timed Tests, Test Items, Achievement Tests, Testing
Wang, Zuowei; O'Reilly, Tenaha; Sabatini, John; McCarthy, Kathryn S.; McNamara, Danielle S. – Grantee Submission, 2021
We compared high school students' performance on a traditional comprehension assessment, which required them to identify key information and draw inferences from single texts, and on a scenario-based assessment (SBA), which required them to integrate, evaluate, and apply information across multiple sources. Both assessments focused on a non-academic topic…
Descriptors: Comparative Analysis, High School Students, Inferences, Reading Tests
Peer reviewed
Liao, Ray J. T. – Reading in a Foreign Language, 2021
This study compared the processes that second language learners used when completing reading tasks in a multiple-choice question (MCQ) format with those in a short-answer question (SAQ) format. Sixteen nonnative English-speaking students from a large Midwestern college in the United States were invited to complete MCQ and SAQ English…
Descriptors: Multiple Choice Tests, Test Format, Comparative Analysis, Reading Tests
Peer reviewed
Dasher, Holly; Pilgrim, Jodi – Texas Journal of Literacy Education, 2022
Schools around the nation are increasingly offering online testing options. House Bill (HB) 3906, passed by the 86th Texas Legislature in 2019, resulted in the STAAR redesign, which will be administered in the 2022-2023 school year. The redesign includes several components, among them online administration of the STAAR. With the…
Descriptors: Test Format, Reading Tests, School Districts, Achievement Tests
Peer reviewed
Smail, Layes; Sana, Tibi; Yamina, Bouakkaz; Rebai, Mohamed – Reading & Writing Quarterly, 2022
This study examined whether the phonological awareness (PA) deficit in Arabic-speaking dyslexic children could be impacted by the presence vs. absence of verbal working memory (WM) as a function of the sensory modality of administration (auditory vs. visual) of the phonological tests. Three PA tasks, i.e., phoneme…
Descriptors: Phonological Awareness, Dyslexia, Short Term Memory, Verbal Ability
Peer reviewed
Kim, Ahyoung Alicia; Tywoniw, Rurik L.; Chapman, Mark – Language Assessment Quarterly, 2022
Technology-enhanced items (TEIs) are innovative, computer-delivered test items that allow test takers to interact with the test environment more fully than traditional multiple-choice items (MCIs) do. The interactive nature of TEIs offers improved construct coverage compared with MCIs, but little research exists regarding students' performance on…
Descriptors: Language Tests, Test Items, Computer Assisted Testing, English (Second Language)
Peer reviewed
Liao, Linyu – English Language Teaching, 2020
As a high-stakes standardized test, IELTS is expected to have comparable forms of test papers so that test takers from different test administrations on different dates receive comparable test scores. Therefore, this study examined the text difficulty and task characteristics of four parallel academic IELTS reading tests to reveal to what extent…
Descriptors: Second Language Learning, English (Second Language), Language Tests, High Stakes Tests
Wang, Lu; Steedle, Jeffrey – ACT, Inc., 2020
In recent ACT mode comparability studies, students testing on laptop or desktop computers earned slightly higher scores on average than students who tested on paper, especially on the ACT® reading and English tests (Li et al., 2017). Equating procedures adjust for such "mode effects" to make ACT scores comparable regardless of testing…
Descriptors: Test Format, Reading Tests, Language Tests, English
Peer reviewed
Arnold, Sharon; Reed, Phil – British Journal of Special Education, 2019
Approximately 30% of school-aged individuals with autism spectrum disorder (ASD) are nonverbal (that is, they have little or no spontaneous spoken language). Most reading tests require verbalisation, which may underestimate reading ability in this group. To determine decoding abilities of nonverbal children with ASD (with an age range of 72 to…
Descriptors: Word Recognition, Autism, Pervasive Developmental Disorders, Decoding (Reading)
Hildenbrand, Lena; Wiley, Jennifer – Grantee Submission, 2021
Many studies have demonstrated that testing students on to-be-learned materials can be an effective learning activity. However, past studies have also shown that some practice test formats are more effective than others. Open-ended recall or short-answer practice tests may be effective because the questions prompt deeper processing as students…
Descriptors: Test Format, Outcomes of Education, Cognitive Processes, Learning Activities
Peer reviewed
Kaya, Elif; O'Grady, Stefan; Kalender, Ilker – Language Testing, 2022
Language proficiency testing serves an important function of classifying examinees into different categories of ability. However, misclassification is to some extent inevitable and may have important consequences for stakeholders. Recent research suggests that classification efficacy may be enhanced substantially using computerized adaptive…
Descriptors: Item Response Theory, Test Items, Language Tests, Classification