Publication Date

| Period | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 0 |
| Since 2007 (last 20 years) | 5 |
Descriptor

| Descriptor | Count |
| --- | --- |
| Comparative Analysis | 5 |
| Computation | 5 |
| Accuracy | 4 |
| Statistical Analysis | 3 |
| Adaptive Testing | 2 |
| Bayesian Statistics | 2 |
| Data Analysis | 2 |
| Difficulty Level | 2 |
| Item Response Theory | 2 |
| Regression (Statistics) | 2 |
| Sample Size | 2 |
Source

| Source | Count |
| --- | --- |
| ETS Research Report Series | 1 |
| Educational Testing Service | 1 |
| Educational and Psychological… | 1 |
| Journal of Educational… | 1 |
| Journal of Educational and… | 1 |
Author

| Author | Count |
| --- | --- |
| Moses, Tim | 5 |
| Kim, Sooyeon | 2 |
| Miao, Jing | 2 |
| Dorans, Neil | 1 |
| Dorans, Neil J. | 1 |
| Yoo, Hanwook | 1 |
| Yoo, Hanwook Henry | 1 |
Publication Type

| Publication Type | Count |
| --- | --- |
| Journal Articles | 4 |
| Reports - Research | 3 |
| Reports - Evaluative | 2 |
Kim, Sooyeon; Moses, Tim; Yoo, Hanwook – Journal of Educational Measurement, 2015
This study investigates the accuracy of item response theory (IRT) proficiency estimators under multistage testing (MST). We chose a two-stage MST design that includes four modules (one at Stage 1, three at Stage 2) and three difficulty paths (low, middle, high). We assembled various two-stage MST panels (i.e., forms) by manipulating two…
Descriptors: Comparative Analysis, Item Response Theory, Computation, Accuracy

Moses, Tim – Educational and Psychological Measurement, 2014
In this study, smoothing and scaling approaches are compared for estimating subscore-to-composite scaling results involving composites computed as rounded and weighted combinations of subscores. The considered smoothing and scaling approaches included those based on raw data, on smoothing the bivariate distribution of the subscores, on smoothing…
Descriptors: Weighted Scores, Scaling, Data Analysis, Comparative Analysis

Kim, Sooyeon; Moses, Tim; Yoo, Hanwook Henry – ETS Research Report Series, 2015
The purpose of this inquiry was to investigate the effectiveness of item response theory (IRT) proficiency estimators in terms of estimation bias and error under multistage testing (MST). We chose a 2-stage MST design in which 1 adaptation to the examinees' ability levels takes place. It includes 4 modules (1 at Stage 1, 3 at Stage 2) and 3 paths…
Descriptors: Item Response Theory, Computation, Statistical Bias, Error of Measurement

Moses, Tim; Miao, Jing; Dorans, Neil J. – Journal of Educational and Behavioral Statistics, 2010
In this study, the accuracies of four strategies were compared for estimating conditional differential item functioning (DIF), including raw data, logistic regression, log-linear models, and kernel smoothing. Real data simulations were used to evaluate the estimation strategies across six items, DIF and No DIF situations, and four sample size…
Descriptors: Test Bias, Statistical Analysis, Computation, Comparative Analysis

Moses, Tim; Miao, Jing; Dorans, Neil – Educational Testing Service, 2010
This study compared the accuracies of four differential item functioning (DIF) estimation methods, where each method makes use of only one of the following: raw data, logistic regression, loglinear models, or kernel smoothing. The major focus was on the estimation strategies' potential for estimating score-level, conditional DIF. A secondary focus…
Descriptors: Test Bias, Statistical Analysis, Computation, Scores
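The two Moses, Miao, and Dorans entries above compare DIF estimation strategies, one of which is logistic regression. As a rough illustration only (not the procedure from either paper), the sketch below fits nested logistic models on simulated data — item response predicted by a matching score alone versus score plus group membership — and uses the likelihood-ratio statistic to flag uniform DIF. All variable names, effect sizes, and the simulation setup are invented for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, iters=25):
    """Fit logistic regression by Newton's method (IRLS);
    returns coefficients and the maximized log-likelihood."""
    X = np.column_stack([np.ones(len(y)), X])  # add intercept
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = sigmoid(X @ w)
        W = p * (1 - p)                         # IRLS weights
        H = X.T @ (X * W[:, None]) + 1e-8 * np.eye(X.shape[1])
        w += np.linalg.solve(H, X.T @ (y - p))  # Newton step
    p = sigmoid(X @ w)
    ll = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return w, ll

# Simulated data (hypothetical): 2,000 examinees, a matching score,
# focal-group membership, and an item that is harder for the focal
# group at the same score level (uniform DIF).
rng = np.random.default_rng(0)
n = 2000
score = rng.normal(0.0, 1.0, n)       # matching variable (e.g. total score)
group = rng.integers(0, 2, n)         # 0 = reference, 1 = focal
logit = 1.2 * score - 0.8 * group     # the group term is the DIF effect
y = (rng.random(n) < sigmoid(logit)).astype(float)

# Nested models: score only vs. score + group.
_, ll0 = fit_logistic(score[:, None], y)
_, ll1 = fit_logistic(np.column_stack([score, group]), y)
lr_stat = 2.0 * (ll1 - ll0)  # ~ chi-square(1) under the no-DIF null
print(f"LR statistic for uniform DIF: {lr_stat:.1f}")
```

A large statistic relative to the chi-square(1) distribution indicates uniform DIF; adding a score-by-group interaction to the larger model would extend the same comparison to non-uniform DIF.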

