Showing all 7 results
Peer reviewed
Liu, Yang; Yang, Ji Seung – Journal of Educational and Behavioral Statistics, 2018
The uncertainty arising from item parameter estimation is often not negligible and must be accounted for when calculating latent variable (LV) scores in item response theory (IRT). This is particularly so when the calibration sample size is limited and/or the calibration IRT model is complex. In the current work, we treat two-stage IRT scoring as a…
Descriptors: Intervals, Scores, Item Response Theory, Bayesian Statistics
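To make the two-stage idea concrete, here is a minimal Python sketch: latent scores are recomputed under each draw of the item parameters so that calibration uncertainty propagates into the score. It assumes a 2PL model and uses simulated parameter draws; none of the values or model choices come from the paper itself.

# Two-stage scoring sketch: stage 1 (done elsewhere) yields posterior draws of
# the 2PL item parameters; stage 2 computes an EAP score per draw and looks at
# the spread of the scores across draws.
import numpy as np

rng = np.random.default_rng(0)
n_items, n_draws = 10, 200

# Hypothetical stage-1 output: draws of discriminations (a) and difficulties (b).
a_draws = rng.lognormal(0.0, 0.2, size=(n_draws, n_items))
b_draws = rng.normal(0.0, 1.0, size=(n_draws, n_items))

responses = rng.integers(0, 2, size=n_items)   # one examinee's 0/1 pattern
theta_grid = np.linspace(-4, 4, 81)            # quadrature grid for the latent trait
prior = np.exp(-0.5 * theta_grid**2)           # standard-normal prior (unnormalized)

def eap(a, b):
    """EAP score for the response pattern given one set of item parameters."""
    p = 1.0 / (1.0 + np.exp(-a * (theta_grid[:, None] - b)))   # grid x items
    like = np.prod(np.where(responses == 1, p, 1 - p), axis=1)
    post = like * prior
    return np.sum(theta_grid * post) / np.sum(post)

scores = np.array([eap(a, b) for a, b in zip(a_draws, b_draws)])
print(f"EAP = {scores.mean():.3f}, SD across calibration draws = {scores.std():.3f}")

The SD across draws is the part a fixed-parameter scoring run would miss: it reflects how much the score moves when the calibration itself is uncertain.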
Peer reviewed
Yang, Ji Seung; Zheng, Xiaying – Journal of Educational and Behavioral Statistics, 2018
The purpose of this article is to introduce and review the capability and performance of the Stata item response theory (IRT) package available as of Stata v.14 (2015). Using a simulated data set and a publicly available item response data set extracted from the Programme for International Student Assessment (PISA), we review the IRT package from…
Descriptors: Item Response Theory, Item Analysis, Computer Software, Statistical Analysis
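In Stata 14 the fit reviewed here is a one-line command such as "irt 1pl q1-q5". For readers without Stata, the following Python sketch shows the computation that command performs under the hood: marginal maximum likelihood estimation of a Rasch/1PL model with a normal latent trait. The data are simulated and the quadrature and optimizer choices are illustrative, not Stata's internals.

# Rasch/1PL marginal maximum likelihood sketch: integrate the latent trait out
# with Gauss-Hermite quadrature, then maximize the marginal likelihood over the
# item difficulties.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_persons, n_items = 500, 5
true_b = np.linspace(-1, 1, n_items)
theta = rng.normal(size=(n_persons, 1))
X = (rng.random((n_persons, n_items)) < 1 / (1 + np.exp(-(theta - true_b)))).astype(float)

nodes, weights = np.polynomial.hermite_e.hermegauss(21)
weights = weights / np.sqrt(2 * np.pi)   # quadrature weights for a standard-normal prior

def neg_marginal_loglik(b):
    # P(correct | theta) on the quadrature grid: nodes x items
    p = 1 / (1 + np.exp(-(nodes[:, None] - b)))
    # log-likelihood of each person's pattern at each node: persons x nodes
    ll = X @ np.log(p).T + (1 - X) @ np.log(1 - p).T
    return -np.sum(np.log(np.exp(ll) @ weights))

fit = minimize(neg_marginal_loglik, np.zeros(n_items), method="BFGS")
print("estimated difficulties:", np.round(fit.x, 2))   # should track true_b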
Ostrow, Korinn; Donnelly, Christopher; Heffernan, Neil – International Educational Data Mining Society, 2015
As adaptive tutoring systems grow increasingly popular for the completion of classwork and homework, it is crucial to assess the manner in which students are scored within these platforms. The majority of systems, including ASSISTments, return the binary correctness of a student's first attempt at solving each problem. Yet for many teachers,…
Descriptors: Intelligent Tutoring Systems, Scoring, Testing, Credits
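The contrast the abstract draws can be stated in a few lines of Python. The partial-credit rule below, with its penalty weights, is purely illustrative and is not ASSISTments' actual scoring algorithm; it only shows the kind of information a binary first-attempt score throws away.

def binary_score(first_attempt_correct: bool) -> float:
    """What most systems return: 1 if the first attempt was right, else 0."""
    return 1.0 if first_attempt_correct else 0.0

def partial_credit(attempts: int, hints_used: int,
                   attempt_penalty: float = 0.2, hint_penalty: float = 0.1) -> float:
    """Start from full credit and discount for each extra attempt and each hint."""
    score = 1.0 - attempt_penalty * (attempts - 1) - hint_penalty * hints_used
    return max(score, 0.0)

# A student who solved the problem on the second attempt after one hint:
print(binary_score(False))    # 0.0  (binary scoring sees only failure)
print(partial_credit(2, 1))   # 0.7  (partial credit preserves the near-success)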
Peer reviewed
Wang, Ze; Rohrer, David; Chuang, Chi-ching; Fujiki, Mayo; Herman, Keith; Reinke, Wendy – Journal of Experimental Education, 2015
This study compared 5 scoring methods in terms of their statistical assumptions. They were then used to score the Teacher Observation of Classroom Adaptation Checklist, a measure consisting of 3 subscales and 21 Likert-type items. The 5 methods used were (a) sum/average scores of items, (b) latent factor scores with continuous indicators, (c)…
Descriptors: Scoring, Check Lists, Comparative Analysis, Differences
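Two of the five methods the study compares, (a) sum/average scores and (b) latent factor scores, can be sketched in Python as follows. The simulated continuous data stand in for the Likert-type items, and the loadings are treated as known for simplicity; all values are illustrative, not the checklist's.

# Compare (a) averaging items with (b) regression-method factor scores from a
# one-factor model: f_hat = Lambda' Sigma^{-1} (x - mu).
import numpy as np

rng = np.random.default_rng(2)
n, k = 300, 7
loadings = np.full(k, 0.7)                   # assumed one-factor loadings
uniquenesses = 1 - loadings**2

eta = rng.normal(size=(n, 1))                # true factor scores
X = eta * loadings + rng.normal(size=(n, k)) * np.sqrt(uniquenesses)

# (a) sum/average scoring: every item weighted equally
avg_scores = X.mean(axis=1)

# (b) regression factor scores: items weighted by the model-implied covariance
Sigma = np.outer(loadings, loadings) + np.diag(uniquenesses)
w = np.linalg.solve(Sigma, loadings)         # Sigma^{-1} Lambda
factor_scores = (X - X.mean(axis=0)) @ w

print("correlation of the two scorings:",
      np.round(np.corrcoef(avg_scores, factor_scores)[0, 1], 3))

With equal loadings the two methods agree almost perfectly; the statistical assumptions the study examines start to matter when loadings are unequal or the indicators are treated as ordinal.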
Wang, Ze; Rohrer, David; Chuang, Chi-ching; Fujiki, Mayo; Herman, Keith; Reinke, Wendy – Grantee Submission, 2015
This study compared 5 scoring methods in terms of their statistical assumptions. They were then used to score the Teacher Observation of Classroom Adaptation Checklist, a measure consisting of 3 subscales and 21 Likert-type items. The 5 methods used were (a) sum/average scores of items, (b) latent factor scores with continuous indicators, (c)…
Descriptors: Scoring, Check Lists, Differences, Comparative Analysis
Peer reviewed
Yuan, Ke-Hai; Zhang, Zhiyong – Psychometrika, 2012
The paper develops a two-stage robust procedure for structural equation modeling (SEM) and an R package "rsem" to facilitate the use of the procedure by applied researchers. In the first stage, M-estimates of the saturated mean vector and covariance matrix of all variables are obtained. Those corresponding to the substantive variables…
Descriptors: Structural Equation Models, Tests, Federal Aid, Psychometrics
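The first stage described in the abstract, M-estimation of a saturated mean vector and covariance matrix, can be sketched in Python as Huber-type iterative reweighting by Mahalanobis distance. The tuning constant and normalization below are common illustrative choices, not a port of the "rsem" R package.

# Stage 1 of the two-stage robust SEM procedure: downweight outlying cases when
# estimating the saturated mean and covariance, then (stage 2, not shown) fit
# the structural model to these robust estimates.
import numpy as np
from scipy.stats import chi2

def huber_mean_cov(X, q=0.9, n_iter=50):
    n, p = X.shape
    r0 = np.sqrt(chi2.ppf(q, df=p))           # Huber tuning constant
    mu, S = X.mean(axis=0), np.cov(X, rowvar=False)
    for _ in range(n_iter):
        d = np.sqrt(np.einsum("ij,jk,ik->i", X - mu, np.linalg.inv(S), X - mu))
        w = np.where(d <= r0, 1.0, r0 / d)    # cases beyond r0 get weight < 1
        mu = (w[:, None] * X).sum(axis=0) / w.sum()
        Xc = X - mu
        S = (w[:, None]**2 * Xc).T @ Xc / np.sum(w**2)
    return mu, S

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 4))
X[:5] += 10                                    # a few gross outliers
mu, S = huber_mean_cov(X)
print(np.round(mu, 2))                         # near zero despite the outliers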
Peer reviewed
Yen, Wendy M. – Journal of Educational Measurement, 1984
A procedure for obtaining maximum likelihood trait estimates from number-correct (NC) scores for the three-parameter logistic model is presented. It produces an NC-score-to-trait-estimate conversion table. Analyses in the estimated true score metric confirm the conclusions made in the trait metric. (Author/DWH)
Descriptors: Achievement Tests, Error of Measurement, Estimation (Mathematics), Latent Trait Theory
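The flavor of such a conversion table can be reproduced in a few lines of Python. For simplicity this sketch inverts the test characteristic curve (TCC) numerically, a closely related but simpler rule than Yen's exact ML procedure, and the 3PL item parameters are made up for illustration.

# Build an NC-score-to-theta table under a 3PL model by solving TCC(theta) = NC.
import numpy as np
from scipy.optimize import brentq

a = np.array([1.0, 1.2, 0.8, 1.5, 1.1])    # discriminations
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])  # difficulties
c = np.full(5, 0.2)                        # lower asymptotes (guessing)

def tcc(theta):
    """Expected number-correct score at theta."""
    return np.sum(c + (1 - c) / (1 + np.exp(-a * (theta - b))))

# NC scores at or below sum(c), or at the maximum, have no finite solution.
for nc in range(len(a) + 1):
    if tcc(-8) < nc < tcc(8):
        theta = brentq(lambda t: tcc(t) - nc, -8, 8)
        print(f"NC = {nc}  ->  theta = {theta:+.2f}")
    else:
        print(f"NC = {nc}  ->  no finite estimate")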