Showing all 14 results
Peer reviewed
Direct link
Myers, Matthew C.; Wilson, Joshua – International Journal of Artificial Intelligence in Education, 2023
This study evaluated the construct validity of six scoring traits of an automated writing evaluation (AWE) system called "MI Write." Persuasive essays (N = 100) written by students in grades 7 and 8 were randomized at the sentence level using a script written with Python's NLTK module. Each persuasive essay was randomized 30 times (n =…
Descriptors: Construct Validity, Automation, Writing Evaluation, Algorithms
Peer reviewed
PDF available on ERIC
Pásztor, Attila; Magyar, Andrea; Pásztor-Kovács, Anita; Rausch, Attila – Journal of Intelligence, 2022
The aims of the study were (1) to develop a domain-general computer-based assessment tool for inductive reasoning and to empirically test the theoretical models of Klauer and Christou and Papageorgiou; and (2) to develop an online game to foster inductive reasoning through mathematical content and to investigate its effectiveness. The sample was…
Descriptors: Game Based Learning, Logical Thinking, Computer Assisted Testing, Models
Peer reviewed
Direct link
Cohen, Dale J.; Ballman, Alesha; Rijmen, Frank; Cohen, Jon – Applied Measurement in Education, 2020
Computer-based, pop-up glossaries are perhaps the most promising accommodation aimed at mitigating the influence of linguistic structure and cultural bias on the performance of English Learner (EL) students on statewide assessments. To date, there is no established procedure for identifying the words that require a glossary for EL students that is…
Descriptors: Glossaries, Testing Accommodations, English Language Learners, Computer Assisted Testing
Durán, Richard P.; Zhang, Ting; Sañosa, David; Stancavage, Fran – American Institutes for Research, 2020
The National Assessment of Educational Progress's (NAEP's) transition to an entirely digitally based assessment (DBA) began in 2017. As part of this transition, new types of NAEP items have begun to be developed that leverage the DBA environment to measure a wider range of knowledge and skills. These new item types include the science…
Descriptors: National Competency Tests, Computer Assisted Testing, Science Tests, Test Items
Peer reviewed
Direct link
Kim, Ahyoung Alicia; Tywoniw, Rurik L.; Chapman, Mark – Language Assessment Quarterly, 2022
Technology-enhanced items (TEIs) are innovative, computer-delivered test items that allow test takers to interact with the test environment more fully than traditional multiple-choice items (MCIs) do. The interactive nature of TEIs offers improved construct coverage compared with MCIs, but little research exists regarding students' performance on…
Descriptors: Language Tests, Test Items, Computer Assisted Testing, English (Second Language)
Peer reviewed
Direct link
Wilson, Joshua; Chen, Dandan; Sandbank, Micheal P.; Hebert, Michael – Journal of Educational Psychology, 2019
The present study examined issues pertaining to the reliability of writing assessment in the elementary grades, among samples of both struggling and nonstruggling writers. The study also extended nascent research on the reliability and practical applications of automated essay scoring (AES) systems in Response to Intervention frameworks…
Descriptors: Computer Assisted Testing, Automation, Scores, Writing Tests
Davison, Mark L.; Biancarosa, Gina; Carlson, Sarah E.; Seipel, Ben; Liu, Bowen – Assessment for Effective Intervention, 2018
The computer-administered Multiple-Choice Online Causal Comprehension Assessment (MOCCA) for Grades 3 to 5 has an innovative, 40-item multiple-choice structure in which each distractor corresponds to a comprehension process upon which poor comprehenders have been shown to rely. This structure requires revised thinking about measurement issues…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Pilot Projects, Measurement
Davison, Mark L.; Biancarosa, Gina; Carlson, Sarah E.; Seipel, Ben; Liu, Bowen – Grantee Submission, 2018
The computer-administered Multiple-Choice Online Causal Comprehension Assessment (MOCCA) for Grades 3 to 5 has an innovative, 40-item multiple-choice structure in which each distractor corresponds to a comprehension process upon which poor comprehenders have been shown to rely. This structure requires revised thinking about measurement issues…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Pilot Projects, Measurement
Peer reviewed
Direct link
Mix, Daniel F.; Tao, Shuqin – AERA Online Paper Repository, 2017
Purposes: This study uses think-alouds and cognitive interviews to provide validity evidence for an online formative assessment--i-Ready Standards Mastery (iSM) mini-assessments--which involves a heavy use of innovative items. iSM mini-assessments are intended to help teachers determine student understanding of each of the on-grade-level Common…
Descriptors: Formative Evaluation, Computer Assisted Testing, Test Validity, Student Evaluation
Peer reviewed
Direct link
Ginsburg, Herbert P.; Lee, Young-Sun; Pappas, Sandra – ZDM: The International Journal on Mathematics Education, 2016
This paper investigates the power of the computer guided clinical interview (CI) and new curriculum based measurement (CBM) measures to identify and help children at risk of low mathematics achievement. We use data from large numbers of children in Kindergarten through Grade 3 to investigate the construct validity of CBM risk categories. The basic…
Descriptors: Interviews, Curriculum Based Assessment, Evaluation Methods, At Risk Students
Peer reviewed
Direct link
Lin, Sheau-Wen; Liu, Yu; Chen, Shin-Feng; Wang, Jing-Ru; Kao, Huey-Lien – International Journal of Science and Mathematics Education, 2015
The purpose of this study was to develop a computer-based assessment of elementary school students' listening comprehension of science talk within an inquiry-oriented environment. The development procedure had three steps: a literature review to define the framework of the test, collecting and identifying key constructs of science talk, and…
Descriptors: Listening Comprehension, Science Education, Computer Assisted Testing, Test Construction
Wang, Shudong; McCall, Marty; Jiao, Hong; Harris, Gregg – Online Submission, 2012
The purposes of this study are twofold: first, to investigate the construct or factorial structure of a set of Reading and Mathematics computerized adaptive tests (CAT), "Measures of Academic Progress" (MAP), given in different states at different grades and academic terms; and second, to investigate the invariance of test…
Descriptors: Construct Validity, Factor Structure, Adaptive Testing, Computer Assisted Testing
Peer reviewed
Direct link
Kuo, Che-Yu; Wu, Hsin-Kai; Jen, Tsung-Hau; Hsu, Ying-Shao – International Journal of Science Education, 2015
The potential of computer-based assessments for capturing complex learning outcomes has been discussed; however, relatively little is understood about how to leverage that potential for summative and accountability purposes. The aim of this study is to develop and validate a multimedia-based assessment of scientific inquiry abilities (MASIA) to…
Descriptors: Multimedia Materials, Program Development, Program Validation, Test Construction
Peer reviewed
PDF available on ERIC
Forster, Natalie; Souvignier, Elmar – Learning Disabilities: A Contemporary Journal, 2011
The purpose of this study was to examine the technical adequacy of a computer-based assessment instrument, grounded in hierarchical models of text comprehension, for monitoring student reading progress following the Curriculum-Based Measurement (CBM) approach. At intervals of two weeks, 120 third-grade students completed eight CBM tests. To…
Descriptors: Educational Needs, Intervals, Curriculum Based Assessment, Computer Assisted Testing