Publication Date
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 1 |
| Since 2007 (last 20 years) | 5 |
Descriptor
| Computation | 5 |
| Simulation | 5 |
| Item Response Theory | 3 |
| Test Items | 3 |
| Achievement Tests | 2 |
| Comparative Analysis | 2 |
| Computer Assisted Testing | 2 |
| Evaluation Methods | 2 |
| Foreign Countries | 2 |
| Monte Carlo Methods | 2 |
| Probability | 2 |
Source
| International Journal of… | 5 |
Author
| Mapuranga, Raymond | 1 |
| Rutkowski, David | 1 |
| Rutkowski, Leslie | 1 |
| Sass, D. A. | 1 |
| Schmitt, T. A. | 1 |
| Sen, Sedat | 1 |
| Sullivan, J. R. | 1 |
| Veldkamp, Bernard P. | 1 |
| Walker, C. M. | 1 |
| Wyse, Adam E. | 1 |
| Zhou, Yan | 1 |
Publication Type
| Journal Articles | 5 |
| Reports - Evaluative | 3 |
| Reports - Research | 2 |
Education Level
| Grade 4 | 1 |
Assessments and Surveys
| Program for International… | 2 |
| Trends in International… | 1 |
Sen, Sedat – International Journal of Testing, 2018
Recent research has shown that over-extraction of latent classes can be observed in the Bayesian estimation of the mixed Rasch model when the distribution of ability is non-normal. This study examined the effect of non-normal ability distributions on the number of latent classes in the mixed Rasch model when estimated with maximum likelihood…
Descriptors: Item Response Theory, Comparative Analysis, Computation, Maximum Likelihood Statistics
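The data-generating step behind a study like this can be sketched briefly: draw abilities from a non-normal distribution and simulate dichotomous Rasch responses. The sketch below is illustrative only (the item difficulties and the standardized chi-square ability distribution are assumptions, not taken from the article):

```python
import numpy as np

rng = np.random.default_rng(42)

def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def simulate_responses(theta, b, rng):
    """Simulate 0/1 item responses for abilities theta and difficulties b."""
    p = rasch_prob(theta[:, None], b[None, :])
    return (rng.random(p.shape) < p).astype(int)

n_persons, n_items = 1000, 20
b = np.linspace(-2, 2, n_items)  # assumed difficulty range

# Normal vs. right-skewed ability distributions, both standardized
theta_normal = rng.normal(0.0, 1.0, n_persons)
chi = rng.chisquare(df=2, size=n_persons)
theta_skewed = (chi - 2.0) / 2.0  # mean 0, variance 1, right-skewed

data_normal = simulate_responses(theta_normal, b, rng)
data_skewed = simulate_responses(theta_skewed, b, rng)
```

Fitting a mixed Rasch model to such data and counting extracted classes would require specialized estimation software, which this sketch deliberately leaves out.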
Rutkowski, Leslie; Rutkowski, David; Zhou, Yan – International Journal of Testing, 2016
Using an empirically-based simulation study, we show that typically used methods of choosing an item calibration sample have significant impacts on achievement bias and system rankings. We examine whether recent PISA accommodations, especially for lower performing participants, can mitigate some of this bias. Our findings indicate that standard…
Descriptors: Simulation, International Programs, Adolescents, Student Evaluation
Schmitt, T. A.; Sass, D. A.; Sullivan, J. R.; Walker, C. M. – International Journal of Testing, 2010
Imposed time limits on computer adaptive tests (CATs) can result in examinees having difficulty completing all items, thus compromising the validity and reliability of ability estimates. In this study, the effects of speededness were explored in a simulated CAT environment by varying examinee response patterns to end-of-test items. Expectedly,…
Descriptors: Monte Carlo Methods, Simulation, Computer Assisted Testing, Adaptive Testing
Wyse, Adam E.; Mapuranga, Raymond – International Journal of Testing, 2009
Differential item functioning (DIF) analysis is a statistical technique used for ensuring the equity and fairness of educational assessments. This study formulates a new DIF analysis method using the information similarity index (ISI). ISI compares item information functions when data fits the Rasch model. Through simulations and an international…
Descriptors: Test Bias, Evaluation Methods, Test Items, Educational Assessment
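The comparison of item information functions that underlies this approach is easy to illustrate: under the Rasch model an item's information is I(θ) = P(θ)(1 − P(θ)). The sketch below compares the curves for two calibrations of the same item using a simple overlap ratio; this ratio is only a stand-in, as the ISI's exact definition is given in the article, and the difficulty values are assumptions:

```python
import numpy as np

def rasch_info(theta, b):
    """Rasch item information: I(theta) = P(theta) * (1 - P(theta))."""
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    return p * (1.0 - p)

theta = np.linspace(-4, 4, 161)

# Two calibrations of the same item, e.g. reference vs. focal group
info_ref = rasch_info(theta, b=0.0)
info_focal = rasch_info(theta, b=0.5)

# Illustrative overlap ratio of the two information curves
# (equal grid spacing, so the step size cancels out of the ratio)
similarity = np.minimum(info_ref, info_focal).sum() / \
             np.maximum(info_ref, info_focal).sum()
```

A ratio near 1 indicates nearly identical information curves, while larger difficulty gaps between groups push it toward 0, which is the intuition behind information-based DIF screening.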
Veldkamp, Bernard P. – International Journal of Testing, 2008
Integrity™, an online application for testing both the statistical integrity of the test and the academic integrity of the examinees, was evaluated for this review. Program features and the program output are described. An overview of the statistics in Integrity™ is provided, and the application is illustrated with a small simulation study.…
Descriptors: Simulation, Integrity, Statistics, Computer Assisted Testing