Showing all 5 results
Peer reviewed
Militsa G. Ivanova; Hanna Eklöf; Michalis P. Michaelides – Journal of Applied Testing Technology, 2025
Digital administration of assessments allows for the collection of process data indices, such as response time, which can serve as indicators of rapid-guessing and examinee test-taking effort. Setting a time threshold is essential to distinguish effortful from effortless behavior using item response times. Threshold identification methods may…
Descriptors: Test Items, Computer Assisted Testing, Reaction Time, Achievement Tests
Peer reviewed
Joanna Tomkowicz; Andy Porter; Corey Palermo – Journal of Applied Testing Technology, 2024
Little evidence exists regarding students' actual use of testing time in naturalistic settings, particularly in the context of state accountability assessments. This study investigates students' test completion time and performance in the context of a statewide English Language Arts and Mathematics computer-based assessment administered in grades…
Descriptors: Time, Computer Assisted Testing, Achievement Tests, Mathematics Tests
Peer reviewed
Laughlin Davis, Laurie; Morrison, Kristin; Zhou-Yile Schnieders, Joyce; Marsh, Benjamin – Journal of Applied Testing Technology, 2021
With the shift to next generation digital assessments, increased attention has focused on Technology-Enhanced Assessments and Items (TEIs). This study evaluated the feasibility of a high-fidelity digital assessment item response format, which allows students to solve mathematics questions on a tablet using a digital pen. This digital ink approach…
Descriptors: Computer Assisted Testing, Mathematics Instruction, Technology Uses in Education, Mathematics Tests
Peer reviewed
Lissitz, Robert W.; Hou, Xiaodong; Slater, Sharon Cadman – Journal of Applied Testing Technology, 2012
This article investigates several questions regarding the impact of different item formats on measurement characteristics. Constructed response (CR) items and multiple choice (MC) items obviously differ in their formats and in the resources needed to score them. As such, they have been the subject of considerable discussion regarding the impact of…
Descriptors: Computer Assisted Testing, Scoring, Evaluation Problems, Psychometrics
Peer reviewed
Kobrin, Jennifer L.; Deng, Hui; Shaw, Emily J. – Journal of Applied Testing Technology, 2007
This study was designed to address two frequent criticisms of the SAT essay--that essay length is the best predictor of scores, and that there is an advantage in using more "sophisticated" examples as opposed to personal experience. The study was based on 2,820 essays from the first three administrations of the new SAT. Each essay was…
Descriptors: Testing Programs, Computer Assisted Testing, Construct Validity, Writing Skills