Showing all 6 results
Peer reviewed; full text (PDF) available on ERIC
Cui, Yang; Chu, Man-Wai; Chen, Fu – Journal of Educational Data Mining, 2019
Digital game-based assessments generate student process data that are much more difficult to analyze than data from traditional assessments. The formative nature of game-based assessments permits students, by applying and practicing the targeted knowledge and skills during gameplay, to gain experience, receive immediate feedback, and as a result,…
Descriptors: Educational Games, Student Evaluation, Data Analysis, Bayesian Statistics
Peer reviewed; full text (PDF) available on ERIC
Wu, Mike; Davis, Richard L.; Domingue, Benjamin W.; Piech, Chris; Goodman, Noah – International Educational Data Mining Society, 2020
Item Response Theory (IRT) is a ubiquitous model for understanding humans based on their responses to questions, used in fields as diverse as education, medicine and psychology. Large modern datasets offer opportunities to capture more nuances in human behavior, potentially improving test scoring and better informing public policy. Yet larger…
Descriptors: Item Response Theory, Accuracy, Data Analysis, Public Policy
Peer reviewed; full text (PDF) available on ERIC
Kim, Sooyeon; Moses, Tim; Yoo, Hanwook Henry – ETS Research Report Series, 2015
The purpose of this inquiry was to investigate the effectiveness of item response theory (IRT) proficiency estimators in terms of estimation bias and error under multistage testing (MST). We chose a 2-stage MST design in which 1 adaptation to the examinees' ability levels takes place. It includes 4 modules (1 at Stage 1, 3 at Stage 2) and 3 paths…
Descriptors: Item Response Theory, Computation, Statistical Bias, Error of Measurement
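To make the two-stage MST design concrete, here is a minimal routing sketch. The module names, cut scores, and number-correct routing rule are illustrative assumptions, not the specification used in the ETS report.

    # Minimal sketch of routing in a 2-stage multistage test (MST).
    # Module labels and cut scores are hypothetical, for illustration only.

    def route_to_stage2(stage1_score, max_score=20):
        """Route an examinee to one of three Stage 2 modules
        based on a provisional Stage 1 number-correct score."""
        proportion = stage1_score / max_score
        if proportion < 0.4:
            return "easy_module"      # path 1
        elif proportion < 0.7:
            return "medium_module"    # path 2
        else:
            return "hard_module"      # path 3

    # Example: an examinee answering 15 of 20 Stage 1 items correctly
    print(route_to_stage2(15))        # -> "hard_module"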
Peer reviewed; direct link to publisher
He, Wei; Wolfe, Edward W. – Educational and Psychological Measurement, 2012
In administration of individually administered intelligence tests, items are commonly presented in a sequence of increasing difficulty, and test administration is terminated after a predetermined number of incorrect answers. This practice produces stochastically censored data, a form of nonignorable missing data. By manipulating four factors…
Descriptors: Individual Testing, Intelligence Tests, Test Items, Test Length
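A small simulation illustrates how such a discontinue rule censors the data. The Rasch response model and the three-consecutive-incorrect stopping rule below are assumptions for illustration, not the factors manipulated in the study.

    # Sketch: how a discontinue rule produces stochastically censored responses.
    import math, random

    def administer(theta, difficulties, stop_after=3):
        responses, consecutive_wrong = [], 0
        for b in sorted(difficulties):                # items in increasing difficulty
            p = 1.0 / (1.0 + math.exp(-(theta - b)))  # Rasch probability of a correct answer
            x = 1 if random.random() < p else 0
            responses.append(x)
            consecutive_wrong = 0 if x == 1 else consecutive_wrong + 1
            if consecutive_wrong >= stop_after:       # discontinue rule triggers
                break
        # items never reached are censored (missing), not observed-incorrect
        return responses + [None] * (len(difficulties) - len(responses))

    print(administer(theta=0.0, difficulties=[-2, -1, 0, 1, 2, 3]))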
Peer reviewed; direct link to publisher
Kieftenbeld, Vincent; Natesan, Prathiba; Eddy, Colleen – Journal of Psychoeducational Assessment, 2011
The mathematics teaching efficacy beliefs of preservice elementary teachers have been the subject of several studies. A widely used measure in these studies is the Mathematics Teaching Efficacy Beliefs Instrument (MTEBI). The present study provides a detailed analysis of the psychometric properties of the MTEBI using Bayesian item response theory.…
Descriptors: Item Response Theory, Bayesian Statistics, Mathematics Instruction, Preservice Teachers
Peer reviewed; direct link to publisher
Rudner, Lawrence M. – Practical Assessment, Research & Evaluation, 2009
This paper describes and evaluates the use of measurement decision theory (MDT) to classify examinees based on their item response patterns. The model has a simple framework that starts with the conditional probabilities of examinees in each category or mastery state responding correctly to each item. The presented evaluation investigates: (1) the…
Descriptors: Classification, Scoring, Item Response Theory, Measurement
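The classification step the abstract describes can be sketched as a simple Bayes update over mastery states. The two states, the priors, and the per-item conditional correct-probabilities below are hypothetical values, not figures from the paper.

    # Sketch of measurement decision theory (MDT) classification:
    # posterior over mastery states from per-item conditional correct-probabilities.

    priors = {"master": 0.5, "nonmaster": 0.5}
    p_correct = {                      # assumed P(correct | state) for each item
        "master":    [0.9, 0.8, 0.85],
        "nonmaster": [0.4, 0.3, 0.35],
    }

    def classify(responses):
        """Return posterior P(state | responses) for a 0/1 response vector."""
        posterior = {}
        for state, prior in priors.items():
            likelihood = 1.0
            for x, p in zip(responses, p_correct[state]):
                likelihood *= p if x == 1 else (1.0 - p)
            posterior[state] = prior * likelihood
        total = sum(posterior.values())
        return {state: value / total for state, value in posterior.items()}

    print(classify([1, 1, 0]))   # e.g. correct on items 1 and 2, incorrect on item 3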