Showing all 7 results
Peer reviewed
Liu, Yang; Yang, Ji Seung – Journal of Educational and Behavioral Statistics, 2018
The uncertainty arising from item parameter estimation is often not negligible and must be accounted for when calculating latent variable (LV) scores in item response theory (IRT). This is particularly true when the calibration sample is small and/or the calibration IRT model is complex. In the current work, we treat two-stage IRT scoring as a…
Descriptors: Intervals, Scores, Item Response Theory, Bayesian Statistics
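This first result concerns exactly the step where two-stage scoring ignores calibration uncertainty. As a point of reference, here is a minimal sketch (not the authors' method) of a conventional second-stage EAP score under a unidimensional 2PL model with a standard-normal prior, where the calibrated item parameters are plugged in as if they were known; all names and values are illustrative:

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL item response function: P(correct | theta)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def eap_score(responses, a, b, n_quad=61):
    """Second-stage EAP ability estimate with a N(0, 1) prior.
    Treats the calibrated (a, b) as fixed, which is precisely
    where calibration uncertainty gets ignored."""
    theta = np.linspace(-4.0, 4.0, n_quad)      # quadrature grid
    prior = np.exp(-0.5 * theta**2)             # N(0, 1), unnormalized
    p = p_correct(theta[:, None], a, b)         # grid x items
    lik = np.prod(np.where(responses, p, 1.0 - p), axis=1)
    post = prior * lik
    post /= post.sum()
    mean = (theta * post).sum()
    sd = np.sqrt(((theta - mean) ** 2 * post).sum())
    return mean, sd

# Illustrative use: three calibrated items, one response pattern.
a = np.array([1.2, 0.8, 1.5])
b = np.array([-0.5, 0.0, 1.0])
print(eap_score(np.array([True, False, True]), a, b))
```

The posterior SD reported here reflects only response uncertainty; with a small calibration sample, sampling error in (a, b) adds further uncertainty that this plug-in computation cannot see, which is the gap the article addresses.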
Peer reviewed
Wu, Mike; Davis, Richard L.; Domingue, Benjamin W.; Piech, Chris; Goodman, Noah – International Educational Data Mining Society, 2020
Item Response Theory (IRT) is a ubiquitous model for understanding humans based on their responses to questions, used in fields as diverse as education, medicine and psychology. Large modern datasets offer opportunities to capture more nuances in human behavior, potentially improving test scoring and better informing public policy. Yet larger…
Descriptors: Item Response Theory, Accuracy, Data Analysis, Public Policy
Suh, Hongwook – ProQuest LLC, 2010
Response time has been regarded as an important source of information for investigating the relationship between human performance and response speed. It is important to examine how response time relates to item characteristics, especially from the perspective of the various factors that affect an examinee's…
Descriptors: Bayesian Statistics, Computation, Reaction Time, Item Response Theory
Peer reviewed
Mariano, Louis T.; Junker, Brian W. – Journal of Educational and Behavioral Statistics, 2007
When constructed response test items are scored by more than one rater, the repeated ratings allow for the consideration of individual rater bias and variability in estimating student proficiency. Several hierarchical models based on item response theory have been introduced to model such effects. In this article, the authors demonstrate how these…
Descriptors: Test Items, Item Response Theory, Rating Scales, Scoring
Peer reviewed
de la Torre, Jimmy; Patz, Richard J. – Journal of Educational and Behavioral Statistics, 2005
This article proposes a practical method that capitalizes on the availability of information from multiple tests measuring correlated abilities given in a single test administration. By simultaneously estimating different abilities with the use of a hierarchical Bayesian framework, more precise estimates for each ability dimension are obtained.…
Descriptors: Scoring, Markov Processes, Item Response Theory, Tests
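To make the "borrowing of information" concrete, here is a minimal sketch, not the authors' MCMC implementation, of jointly estimating two correlated abilities by grid approximation: a bivariate-normal prior links the dimensions, so each test's responses also sharpen the other ability's estimate. The 2PL likelihood and the correlation value are illustrative assumptions:

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL response probability."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def joint_eap(resp1, resp2, items1, items2, rho=0.6, n=41):
    """Joint EAP for two correlated abilities on a 2-D grid.
    The bivariate-normal prior (correlation rho) is what lets
    information flow between the two tests. The article fits a
    hierarchical Bayesian model by MCMC; a grid approximation is
    used here only to keep the illustration self-contained."""
    g = np.linspace(-4.0, 4.0, n)
    t1, t2 = np.meshgrid(g, g, indexing="ij")
    prior = np.exp(-(t1**2 - 2*rho*t1*t2 + t2**2) / (2*(1 - rho**2)))

    def lik(theta, resp, items):
        a, b = items
        p = p_correct(theta[..., None], a, b)
        return np.prod(np.where(resp, p, 1.0 - p), axis=-1)

    post = prior * lik(t1, resp1, items1) * lik(t2, resp2, items2)
    post /= post.sum()
    return (t1 * post).sum(), (t2 * post).sum()
```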
Peer reviewed
Stark, Stephen; Chernyshenko, Oleksandr S.; Drasgow, Fritz – Applied Psychological Measurement, 2005
This article proposes an item response theory (IRT) approach to constructing and scoring multidimensional pairwise preference items. Individual statements are administered and calibrated using a unidimensional single-stimulus model. Tests are created by combining multidimensional items with a small number of unidimensional pairings needed to…
Descriptors: Test Construction, Scoring, Mathematical Models, Item Response Theory
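The key quantity in this approach is the probability of preferring one statement to another, built from two unidimensional single-stimulus endorsement curves. A minimal sketch of that pairwise-preference probability follows (the statements are calibrated with an unfolding model in the paper; a 2PL-style curve stands in here purely for illustration):

```python
import numpy as np

def endorse(theta, a, b):
    """Endorsement probability for a single statement.
    Illustrative 2PL-style stand-in for the unidimensional
    single-stimulus model used to calibrate statements."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def prefer_s_over_t(theta_s, theta_t, a_s, b_s, a_t, b_t):
    """Probability of preferring statement s to statement t,
    P(s > t) = Ps(1)Pt(0) / [Ps(1)Pt(0) + Ps(0)Pt(1)],
    where each statement loads on its own trait dimension."""
    ps = endorse(theta_s, a_s, b_s)
    pt = endorse(theta_t, a_t, b_t)
    num = ps * (1.0 - pt)
    return num / (num + (1.0 - ps) * pt)
```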
Kirisci, Levent; Hsu, Tse-Chi – 1992
A predictive adaptive testing (PAT) strategy was developed based on statistical predictive analysis, and its feasibility was studied by comparing its performance with that of the Flexilevel, Bayesian modal, and expected a posteriori (EAP) strategies in a simulated environment. The proposed adaptive test is based on the idea of using item difficulty…
Descriptors: Adaptive Testing, Bayesian Statistics, Comparative Analysis, Computer Assisted Testing
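For context on the comparison strategies named in this abstract: a conventional adaptive test, unlike the proposed PAT, typically picks the next item by maximizing Fisher information at the current ability estimate. A minimal 2PL sketch of that baseline step (illustrative only; it does not reproduce the PAT procedure):

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL response probability."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def next_item(theta_hat, a, b, administered):
    """Maximum-information item selection: among items not yet
    administered, choose the most informative at theta_hat."""
    p = p_correct(theta_hat, a, b)
    info = a**2 * p * (1.0 - p)           # 2PL Fisher information
    info[list(administered)] = -np.inf    # mask items already given
    return int(np.argmax(info))
```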