Cui, Yang; Chu, Man-Wai; Chen, Fu – Journal of Educational Data Mining, 2019
Digital game-based assessments generate student process data that is much more difficult to analyze than traditional assessments. The formative nature of game-based assessments permits students, through applying and practicing the targeted knowledge and skills during gameplay, to gain experiences, receive immediate feedback, and as a result,…
Descriptors: Educational Games, Student Evaluation, Data Analysis, Bayesian Statistics
Wu, Mike; Davis, Richard L.; Domingue, Benjamin W.; Piech, Chris; Goodman, Noah – International Educational Data Mining Society, 2020
Item Response Theory (IRT) is a ubiquitous model for understanding humans based on their responses to questions, used in fields as diverse as education, medicine and psychology. Large modern datasets offer opportunities to capture more nuances in human behavior, potentially improving test scoring and better informing public policy. Yet larger…
Descriptors: Item Response Theory, Accuracy, Data Analysis, Public Policy
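To make the IRT entries above concrete, here is a minimal sketch of the two-parameter logistic (2PL) model that this line of work builds on: the probability of a correct response given an examinee's ability and an item's discrimination and difficulty. The function name and parameter values are illustrative assumptions, not taken from any of the cited papers.

```python
import math

def irt_2pl(theta, a, b):
    """P(correct response) under the 2PL IRT model.

    theta: examinee ability
    a: item discrimination
    b: item difficulty
    """
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# An average examinee (theta = 0) on an average-difficulty item (b = 0)
p = irt_2pl(0.0, a=1.0, b=0.0)   # 0.5
```

Test scoring in these studies amounts to estimating `theta` from a pattern of such responses, e.g. by maximum likelihood or Bayesian methods.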
Kim, Sooyeon; Moses, Tim; Yoo, Hanwook Henry – ETS Research Report Series, 2015
The purpose of this inquiry was to investigate the effectiveness of item response theory (IRT) proficiency estimators in terms of estimation bias and error under multistage testing (MST). We chose a 2-stage MST design in which 1 adaptation to the examinees' ability levels takes place. It includes 4 modules (1 at Stage 1, 3 at Stage 2) and 3 paths…
Descriptors: Item Response Theory, Computation, Statistical Bias, Error of Measurement
He, Wei; Wolfe, Edward W. – Educational and Psychological Measurement, 2012
In administration of individually administered intelligence tests, items are commonly presented in a sequence of increasing difficulty, and test administration is terminated after a predetermined number of incorrect answers. This practice produces stochastically censored data, a form of nonignorable missing data. By manipulating four factors…
Descriptors: Individual Testing, Intelligence Tests, Test Items, Test Length
Kieftenbeld, Vincent; Natesan, Prathiba; Eddy, Colleen – Journal of Psychoeducational Assessment, 2011
The mathematics teaching efficacy beliefs of preservice elementary teachers have been the subject of several studies. A widely used measure in these studies is the Mathematics Teaching Efficacy Beliefs Instrument (MTEBI). The present study provides a detailed analysis of the psychometric properties of the MTEBI using Bayesian item response theory.…
Descriptors: Item Response Theory, Bayesian Statistics, Mathematics Instruction, Preservice Teachers
Rudner, Lawrence M. – Practical Assessment, Research & Evaluation, 2009
This paper describes and evaluates the use of measurement decision theory (MDT) to classify examinees based on their item response patterns. The model has a simple framework that starts with the conditional probabilities of examinees in each category or mastery state responding correctly to each item. The presented evaluation investigates: (1) the…
Descriptors: Classification, Scoring, Item Response Theory, Measurement
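The MDT model described in the abstract above can be sketched in a few lines: starting from the conditional probabilities of a correct response in each mastery state, classify an examinee by Bayes' rule over their item response pattern. All numbers below are illustrative assumptions, not values from the paper.

```python
def mdt_classify(responses, p_correct, priors):
    """Posterior P(state | response pattern) via Bayes' rule.

    responses: list of 0/1 item scores
    p_correct[s][i]: P(item i correct | mastery state s)
    priors[s]: prior P(state s)
    """
    posteriors = []
    for s, prior in enumerate(priors):
        likelihood = prior
        for i, x in enumerate(responses):
            p = p_correct[s][i]
            likelihood *= p if x == 1 else (1.0 - p)
        posteriors.append(likelihood)
    total = sum(posteriors)
    return [p / total for p in posteriors]

# Two states (non-master, master), three items; hypothetical probabilities
p_correct = [[0.2, 0.3, 0.4],   # non-master
             [0.7, 0.8, 0.9]]   # master
post = mdt_classify([1, 1, 0], p_correct, priors=[0.5, 0.5])
```

The examinee is assigned to the state with the largest posterior; here two correct answers out of three favor the mastery state.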

