Showing all 7 results
Ohio Department of Education and Workforce, 2025
The Ohio Department of Education and Workforce, in response to Senate Bill 168 (135th General Assembly), initiated a pilot program in the 2024-2025 school year to test the feasibility of remotely administered and proctored state assessments. This pilot aimed to explore the potential of remote testing to enhance flexibility and accessibility for…
Descriptors: Examiners, Supervision, Electronic Learning, Computer Assisted Testing
Peer reviewed
Sarah N. Shakir; Ashley M. Virabouth; Mallory M. Rice – American Biology Teacher, 2025
Exam anxiety has been well-documented to reduce student performance in undergraduate biology courses, especially for students from marginalized groups, which can contribute to achievement gaps. Our exploratory study surveyed 61 undergraduate biology students to better understand how exams affect their anxiety levels, focusing on the impact of exam…
Descriptors: Undergraduate Students, College Science, Biology, Student Attitudes
Peer reviewed
Wallace N. Pinto Jr.; Jinnie Shin – Journal of Educational Measurement, 2025
In recent years, the application of explainability techniques to automated essay scoring and automated short-answer grading (ASAG) models, particularly those based on transformer architectures, has gained significant attention. However, the reliability and consistency of these techniques remain underexplored. This study systematically investigates…
Descriptors: Automation, Grading, Computer Assisted Testing, Scoring
Peer reviewed
Nathaniel Owen; Ananda Senel – Review of Education, 2025
Transparency in high-stakes English language assessment has become crucial for ensuring fairness and maintaining assessment validity in language testing. However, our understanding of how transparency is conceptualised and implemented remains fragmented, particularly in relation to stakeholder experiences and technological innovations. This study…
Descriptors: Accountability, High Stakes Tests, Language Tests, Computer Assisted Testing
Peer reviewed
Andrew Runge; Sarah Goodwin; Yigal Attali; Mya Poe; Phoebe Mulcaire; Kai-Ling Lo; Geoffrey T. LaFlair – Language Testing, 2025
A longstanding criticism of traditional high-stakes writing assessments is their use of static prompts in which test takers compose a single text in response to a prompt. These static prompts do not allow measurement of the writing process. This paper describes the development and validation of an innovative interactive writing task. After the…
Descriptors: Material Development, Writing Evaluation, Writing Assignments, Writing Skills
Peer reviewed
Ying Xu; Xiaodong Li; Jin Chen – Language Testing, 2025
This article provides a detailed review of the Computer-based English Listening Speaking Test (CELST) used in Guangdong, China, as part of the National Matriculation English Test (NMET) to assess students' English proficiency. The CELST measures listening and speaking skills as outlined in the "English Curriculum for Senior Middle…
Descriptors: Computer Assisted Testing, English (Second Language), Language Tests, Listening Comprehension Tests
Peer reviewed
Kayla V. Campaña; Benjamin G. Solomon – Assessment for Effective Intervention, 2025
The purpose of this study was to compare the classification accuracy of data produced by the previous year's end-of-year New York state assessment, a computer-adaptive diagnostic assessment ("i-Ready"), and the gating combination of both assessments to predict the rate of students passing the following year's end-of-year state assessment…
Descriptors: Accuracy, Classification, Diagnostic Tests, Adaptive Testing