Showing all 15 results
Peer reviewed
Sun-Joo Cho; Amanda Goodwin; Jorge Salas; Sophia Mueller – Grantee Submission, 2025
This study incorporates a random forest (RF) approach, which probes complex interactions and nonlinearity among predictors, into an item response model, with the goal that the hybrid outperforms either an RF or an explanatory item response model (EIRM) alone in explaining item responses. In the specified model, called EIRM-RF, predicted values…
Descriptors: Item Response Theory, Artificial Intelligence, Statistical Analysis, Predictor Variables
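The hybrid idea lends itself to a rough sketch. Below is a minimal illustration, not the authors' EIRM-RF implementation: a random forest captures nonlinear covariate effects, and its out-of-fold predicted probabilities enter a logistic model with item effects that stands in for the explanatory item response component. All data, covariates, and simulation settings here are invented for illustration.

```python
# Minimal sketch of a hybrid RF + item-response idea (not the paper's EIRM-RF).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)
n_persons, n_items = 200, 20
item = np.tile(np.arange(n_items), n_persons)    # item index per response
x = rng.normal(size=(item.size, 3))              # hypothetical covariates
logit = 0.8 * x[:, 0] - 0.5 * x[:, 1] * x[:, 2]  # nonlinear ground truth
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))    # simulated item responses

# Stage 1: the RF learns interactions/nonlinearity among covariates;
# out-of-fold probabilities avoid leaking training fit into stage 2.
rf = RandomForestClassifier(n_estimators=200, random_state=0)
p_rf = cross_val_predict(rf, x, y, cv=5, method="predict_proba")[:, 1]

# Stage 2: a logistic model with item effects plus the RF signal
# (person effects are omitted here for brevity).
items_1hot = OneHotEncoder(sparse_output=False).fit_transform(item.reshape(-1, 1))
eirm = LogisticRegression(max_iter=1000).fit(np.column_stack([items_1hot, p_rf]), y)
print("coefficient on the RF signal:", eirm.coef_[0, -1])
```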
Ying Fang; Rod D. Roscoe; Danielle S. McNamara – Grantee Submission, 2023
Artificial Intelligence (AI) based assessments are commonly used in a variety of settings including business, healthcare, policing, manufacturing, and education. In education, AI-based assessments undergird intelligent tutoring systems as well as many tools used to evaluate students and, in turn, guide learning and instruction. This chapter…
Descriptors: Artificial Intelligence, Computer Assisted Testing, Student Evaluation, Evaluation Methods
Peer reviewed
Sami Baral; Eamon Worden; Wen-Chiang Lim; Zhuang Luo; Christopher Santorelli; Ashish Gurung; Neil Heffernan – Grantee Submission, 2024
The effectiveness of feedback in enhancing learning outcomes is well documented within Educational Data Mining (EDM). Prior studies have explored various methodologies for enhancing the effectiveness of feedback to students. Recent developments in Large Language Models (LLMs) have extended their utility in enhancing automated…
Descriptors: Automation, Scoring, Computer Assisted Testing, Natural Language Processing
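As a hedged illustration of how an LLM might be slotted into feedback scoring (the paper's actual pipeline is not reproduced here), the sketch below rates a piece of feedback against a rubric. `call_llm` is a hypothetical stand-in for whatever chat-completion client is available; the rubric wording is likewise invented.

```python
# Illustrative-only sketch of LLM-assisted feedback scoring.
RUBRIC = """Score the feedback from 0 (unhelpful) to 4 (specific, actionable,
and tied to the student's mathematical error). Reply with the integer only."""

def score_feedback(student_work: str, feedback: str, call_llm) -> int:
    """Ask an LLM to rate one piece of feedback against the rubric."""
    prompt = (f"{RUBRIC}\n\nStudent work:\n{student_work}\n\n"
              f"Feedback:\n{feedback}\n\nScore:")
    reply = call_llm(prompt)  # e.g. any chat-completion API call
    return int(reply.strip())

# Usage with a dummy model that always answers "3":
print(score_feedback("2x+3=7 -> x=5", "Check your subtraction step.",
                     lambda prompt: "3"))
```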
Peer reviewed
Xin Wei – Grantee Submission, 2025
This study investigates the time-use patterns of students with learning disabilities during digital mathematics assessments and explores the role of extended time accommodations (ETA) in shaping these patterns. Using latent profile analysis, four distinct time-use profiles were identified separately for students with and without ETA. "Initial…
Descriptors: Computer Assisted Testing, Mathematics Tests, Students with Disabilities, Testing Accommodations
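A rough stand-in for the latent profile analysis step: a Gaussian mixture fit to per-student time-use features, with BIC selecting the number of profiles. The features and data below are invented, not the study's variables.

```python
# Gaussian mixture as a stand-in for latent profile analysis.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# columns: time on first item (s), mean item time (s), share of items revisited
slow = rng.normal([120, 90, 0.4], [20, 15, 0.10], size=(100, 3))
fast = rng.normal([30, 40, 0.1], [10, 10, 0.05], size=(100, 3))
features = np.vstack([slow, fast])

# Fit 1-5 components and keep the model with the lowest BIC.
best = min((GaussianMixture(k, random_state=0).fit(features)
            for k in range(1, 6)),
           key=lambda m: m.bic(features))
print("profiles chosen by BIC:", best.n_components)
print("profile means:\n", best.means_.round(1))
```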
Panaite, Marilena; Ruseti, Stefan; Dascalu, Mihai; Balyan, Renu; McNamara, Danielle S.; Trausan-Matu, Stefan – Grantee Submission, 2019
Intelligent Tutoring Systems (ITSs) focus on promoting knowledge acquisition while providing relevant feedback during students' practice. Self-explanation practice is an effective method used to help students understand complex texts by leveraging comprehension. Our aim is to introduce a deep learning neural model for automatically scoring…
Descriptors: Computer Assisted Testing, Scoring, Intelligent Tutoring Systems, Natural Language Processing
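As a toy illustration of a neural scoring model (far simpler than the paper's architecture), the sketch below mean-pools word embeddings into a linear classifier over rubric levels. The vocabulary, dimensions, and score scale are placeholders.

```python
# Toy neural self-explanation scorer: mean-pooled embeddings + linear head.
import torch
import torch.nn as nn

VOCAB = {"<unk>": 0, "the": 1, "cell": 2, "divides": 3, "because": 4}

class SelfExplanationScorer(nn.Module):
    def __init__(self, vocab_size=len(VOCAB), dim=16, n_scores=4):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab_size, dim)  # mean-pools token embeddings
        self.head = nn.Linear(dim, n_scores)         # 4 hypothetical rubric levels

    def forward(self, token_ids):
        return self.head(self.emb(token_ids))

def encode(text):
    ids = [VOCAB.get(w, 0) for w in text.lower().split()]
    return torch.tensor([ids])

model = SelfExplanationScorer()
logits = model(encode("the cell divides because ..."))
print("predicted rubric level:", logits.argmax(dim=1).item())
```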
Albano, Anthony D.; McConnell, Scott R.; Lease, Erin M.; Cai, Liuhan – Grantee Submission, 2020
Research has shown that the context of practice tasks can have a significant impact on learning, with long-term retention and transfer improving when tasks of different types are mixed by interleaving (abcabcabc) compared with grouping together in blocks (aaabbbccc). This study examines the influence of context via interleaving from a psychometric…
Descriptors: Context Effect, Test Items, Preschool Children, Computer Assisted Testing
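The two practice schedules being contrasted are easy to make concrete; the snippet below simply builds blocked (aaabbbccc) and interleaved (abcabcabc) orderings from placeholder items.

```python
# Blocked vs. interleaved practice schedules from placeholder item types.
from itertools import chain

types = {"a": ["a1", "a2", "a3"],
         "b": ["b1", "b2", "b3"],
         "c": ["c1", "c2", "c3"]}

blocked = list(chain.from_iterable(types.values()))         # aaabbbccc
interleaved = list(chain.from_iterable(zip(*types.values())))  # abcabcabc

print("blocked:    ", blocked)
print("interleaved:", interleaved)
```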
Jing Lu; Chun Wang; Jiwei Zhang; Xue Wang – Grantee Submission, 2023
In statistical inference, changepoints are abrupt variations in a sequence of data. In educational and psychological assessments, it is pivotal to properly differentiate examinees' aberrant behaviors from solution behavior to ensure test reliability and validity. In this paper, we propose a sequential Bayesian changepoint detection algorithm to…
Descriptors: Bayesian Statistics, Behavior Patterns, Computer Assisted Testing, Accuracy
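A batch simplification of the idea (not the authors' sequential algorithm): for a stream of log response times with known variance, compute the posterior over where a mean shift occurred. The means, variance, and data below are illustrative.

```python
# Posterior over a single mean-shift changepoint in a response-time stream.
import numpy as np

def changepoint_posterior(x, sigma=1.0, mu0=0.0, mu1=2.0):
    """P(changepoint at k | data) for a mean shift mu0 -> mu1 at index k."""
    x = np.asarray(x)
    n = x.size
    loglik = np.empty(n + 1)  # k = n means "no change observed yet"
    for k in range(n + 1):
        resid = np.concatenate([x[:k] - mu0, x[k:] - mu1])
        loglik[k] = -0.5 * np.sum(resid**2) / sigma**2
    post = np.exp(loglik - loglik.max())  # flat prior over k
    return post / post.sum()

# Solution behavior (mean 0) switching to aberrant behavior (mean 2) at index 6:
stream = [0.1, -0.2, 0.3, 0.0, -0.1, 0.2, 2.1, 1.9, 2.2]
print("most likely changepoint:", changepoint_posterior(stream).argmax())
```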
Peer reviewed
PDF on ERIC
Ben Seipel; Sarah E. Carlson; Virginia Clinton-Lisell; Mark L. Davison; Patrick C. Kennedy – Grantee Submission, 2022
Originally designed for students in Grades 3 through 5, MOCCA (formerly the Multiple-choice Online Causal Comprehension Assessment) identifies students who struggle with comprehension and helps uncover why they struggle. There are many reasons why students might not comprehend what they read. They may struggle with decoding, or reading words…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Diagnostic Tests, Reading Tests
Goodwin, Amanda P.; Petscher, Yaacov; Tock, Jamie – Grantee Submission, 2021
Background: Middle school students use the information conveyed by morphemes (i.e., units of meaning such as prefixes, root words and suffixes) in different ways to support their literacy endeavours, suggesting the likelihood that morphological knowledge is multidimensional. This has important implications for assessment. Methods: The current…
Descriptors: Morphology (Languages), Morphemes, Middle School Students, Knowledge Level
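For the dimensionality question, a minimal sketch: fit one- and two-factor models to simulated item scores and compare cross-validated log-likelihoods. This uses sklearn's FactorAnalysis as a stand-in for the study's psychometric models, and the two simulated "skills" are hypothetical.

```python
# 1-factor vs. 2-factor fit comparison on simulated morphology item scores.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
skills = rng.normal(size=(500, 2))                # two latent abilities
loadings = np.array([[1, 0]] * 5 + [[0, 1]] * 5)  # 10 items, 5 per skill
items = skills @ loadings.T + rng.normal(0, 0.5, size=(500, 10))

for k in (1, 2):
    score = cross_val_score(FactorAnalysis(n_components=k), items, cv=5).mean()
    print(f"{k}-factor avg held-out log-likelihood: {score:.2f}")
```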
Peer reviewed
PDF on ERIC
Lee, Hee-Sun; McNamara, Danielle; Bracey, Zoë Buck; Wilson, Christopher; Osborne, Jonathan; Haudek, Kevin C.; Liu, Ou Lydia; Pallant, Amy; Gerard, Libby; Linn, Marcia C.; Sherin, Bruce – Grantee Submission, 2019
Rapid advancements in computing have enabled automatic analyses of written texts created in educational settings. The purpose of this symposium is to survey several applications of computerized text analyses used in the research and development of productive learning environments. Four featured research projects have developed or been working on:…
Descriptors: Computational Linguistics, Written Language, Computer Assisted Testing, Scoring
Davison, Mark L.; Biancarosa, Gina; Carlson, Sarah E.; Seipel, Ben; Liu, Bowen – Grantee Submission, 2018
The computer-administered Multiple-Choice Online Causal Comprehension Assessment (MOCCA) for Grades 3 to 5 has an innovative, 40-item multiple-choice structure in which each distractor corresponds to a comprehension process upon which poor comprehenders have been shown to rely. This structure requires revised thinking about measurement issues…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Pilot Projects, Measurement
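The distractor-keyed structure lends itself to a small sketch: tally which comprehension process each chosen distractor taps, so an error profile accompanies the raw score. The process labels and responses below are illustrative, not MOCCA's actual key.

```python
# Diagnostic tallying when each distractor maps to a comprehension process.
from collections import Counter

# option chosen per item: "C" correct, else the process the distractor taps
KEY = {"C": "correct",
       "P": "paraphrase error",    # restates the text locally
       "B": "bridging error",      # faulty causal connection
       "E": "elaboration error"}   # over-relies on background knowledge

responses = ["C", "P", "P", "C", "E", "P", "C", "P", "E", "P"]
profile = Counter(KEY[r] for r in responses)

print("raw score:", profile["correct"], "/", len(responses))
print("dominant error process:",
      max((k for k in profile if k != "correct"), key=profile.get))
```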
Allen, Laura K.; Likens, Aaron D.; McNamara, Danielle S. – Grantee Submission, 2018
The assessment of writing proficiency generally includes analyses of the specific linguistic and rhetorical features contained in the singular essays produced by students. However, researchers have recently proposed that an individual's ability to flexibly adapt the linguistic properties of their writing might more closely capture writing skill.…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Writing Skills
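The flexibility notion can be sketched simply: compute linguistic features per essay, then measure each feature's variability across a writer's essays. The two features here are toy proxies, not the study's indices.

```python
# Variability of per-essay linguistic features as a crude "flexibility" index.
import statistics

def features(essay: str):
    words = essay.split()
    sentences = [s for s in essay.split(".") if s.strip()]
    return (sum(map(len, words)) / len(words),  # mean word length
            len(words) / len(sentences))        # mean sentence length

def flexibility(essays):
    """Standard deviation of each feature across one writer's essays."""
    cols = list(zip(*(features(e) for e in essays)))
    return [statistics.stdev(c) for c in cols]

essays = ["Short words here. Plain style here.",
          "Considerably lengthier vocabulary characterizes this sample."]
print("per-feature variability:", flexibility(essays))
```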
Peer reviewed
PDF on ERIC
Hardcastle, Joseph; Herrmann-Abell, Cari F.; DeBoer, George E. – Grantee Submission, 2017
Can student performance on computer-based tests (CBT) and paper-and-pencil tests (PPT) be considered equivalent measures of student knowledge? States and school districts are grappling with this question, and although studies addressing this question are growing, additional research is needed. We report on the performance of students who took…
Descriptors: Academic Achievement, Computer Assisted Testing, Comparative Analysis, Student Evaluation
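At its simplest, the mode-equivalence question reduces to comparing score distributions across administration modes. The sketch below does that with simulated scores and a two-sample t-test plus an effect size; actual equivalence studies use more careful designs than this.

```python
# Simulated CBT vs. PPT score comparison (illustrative data only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
cbt = rng.normal(70, 10, size=300)  # computer-based test scores
ppt = rng.normal(71, 10, size=300)  # paper-and-pencil test scores

t, p = stats.ttest_ind(cbt, ppt)
d = (cbt.mean() - ppt.mean()) / np.sqrt((cbt.var(ddof=1) + ppt.var(ddof=1)) / 2)
print(f"t = {t:.2f}, p = {p:.3f}, Cohen's d = {d:.2f}")
```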
Peer reviewed
PDF on ERIC
Roscoe, Rod D.; Varner, Laura K.; Crossley, Scott A.; McNamara, Danielle S. – Grantee Submission, 2013
Various computer tools have been developed to support educators' assessment of student writing, including automated essay scoring and automated writing evaluation systems. Research demonstrates that these systems exhibit relatively high scoring accuracy but uncertain instructional efficacy. Students' writing proficiency does not necessarily…
Descriptors: Writing Instruction, Intelligent Tutoring Systems, Computer Assisted Testing, Writing Evaluation
Koedinger, Kenneth R.; McLaughlin, Elizabeth A.; Heffernan, Neil T. – Grantee Submission, 2010
ASSISTments is a web-based math tutor designed to address the need for timely student assessment while simultaneously providing instruction, thereby avoiding lost instruction time that typically occurs during assessment. This article presents a quasi-experiment that evaluates whether ASSISTments use has an effect on improving middle school…
Descriptors: Feedback (Response), Middle School Students, Formative Evaluation, Grade 7