Showing all 10 results
Peer reviewed
Full text available on ERIC (PDF)
Aybek, Eren Can; Demirtasli, R. Nukhet – International Journal of Research in Education and Science, 2017
This article aims to provide a theoretical framework for computerized adaptive tests (CAT) and item response theory models for polytomous items. It also aims to introduce simulation and live CAT software to interested researchers. Computerized adaptive test algorithm, assumptions of item response theory models, nominal response…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Test Items
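The article's CAT software is not reproduced here. For orientation only, a minimal sketch of the item-selection step that a CAT algorithm of this kind performs, assuming a dichotomous 2PL model and maximum-information selection (all names below are hypothetical, not the article's code):

import numpy as np

def p_2pl(theta, a, b):
    # Probability of a correct response under the 2PL model.
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    # Fisher information of each 2PL item at ability level theta.
    p = p_2pl(theta, a, b)
    return a ** 2 * p * (1.0 - p)

def select_next_item(theta_hat, a, b, administered):
    # Core CAT step: pick the unadministered item with maximum
    # information at the current ability estimate.
    info = item_information(theta_hat, a, b)
    info[list(administered)] = -np.inf
    return int(np.argmax(info))

# Toy item bank: discrimination (a) and difficulty (b) parameters.
a = np.array([1.2, 0.8, 1.5, 1.0, 2.0])
b = np.array([-1.0, 0.0, 0.5, 1.0, -0.5])
print(select_next_item(theta_hat=0.3, a=a, b=b, administered={2}))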
Peer reviewed
Direct link
Yang, Ji Seung; Zheng, Xiaying – Journal of Educational and Behavioral Statistics, 2018
The purpose of this article is to introduce and review the capability and performance of the Stata item response theory (IRT) package available as of Stata version 14 (2015). Using a simulated data set and a publicly available item response data set extracted from the Programme for International Student Assessment, we review the IRT package from…
Descriptors: Item Response Theory, Item Analysis, Computer Software, Statistical Analysis
Peer reviewed
Direct link
Chiu, Chia-Yi; Köhn, Hans-Friedrich; Wu, Huey-Min – International Journal of Testing, 2016
The Reduced Reparameterized Unified Model (Reduced RUM) is a diagnostic classification model for educational assessment that has received considerable attention among psychometricians. However, the computational options for researchers and practitioners who wish to use the Reduced RUM in their work, but do not feel comfortable writing their own…
Descriptors: Educational Diagnosis, Classification, Models, Educational Assessment
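The abstract is cut off before the model itself appears; for orientation, the item response function commonly associated with the Reduced RUM can be written as (notation assumed here, not quoted from the article):

P(X_{ij} = 1 \mid \boldsymbol{\alpha}_i) = \pi_j^{*} \prod_{k=1}^{K} \left( r_{jk}^{*} \right)^{\, q_{jk}\,(1 - \alpha_{ik})}

where \pi_j^{*} is the probability of a correct response from an examinee who has mastered every attribute required by item j, r_{jk}^{*} is the penalty for lacking attribute k, q_{jk} is the Q-matrix entry, and \alpha_{ik} indicates whether examinee i has mastered attribute k.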
Peer reviewed
Direct link
McNeish, Daniel M. – Journal of Educational and Behavioral Statistics, 2016
Mixed-effects models (MEMs) and latent growth models (LGMs) are often considered interchangeable save the discipline-specific nomenclature. Software implementations of these models, however, are not interchangeable, particularly with small sample sizes. Restricted maximum likelihood estimation that mitigates small sample bias in MEMs has not been…
Descriptors: Models, Statistical Analysis, Hierarchical Linear Modeling, Sample Size
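As a hedged illustration of the REML-versus-ML contrast the abstract raises (statsmodels is assumed as the software and the data are simulated; this is not the authors' code):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy longitudinal data: 12 subjects measured at 4 time points.
rng = np.random.default_rng(0)
n_subj, n_time = 12, 4
df = pd.DataFrame({
    "id": np.repeat(np.arange(n_subj), n_time),
    "time": np.tile(np.arange(n_time), n_subj),
})
u = rng.normal(0.0, 1.0, n_subj)  # random intercepts
df["y"] = 2.0 + 0.5 * df["time"] + u[df["id"]] + rng.normal(0.0, 1.0, len(df))

# Random-intercept growth model, fit by REML and by full ML.
model = smf.mixedlm("y ~ time", df, groups=df["id"])
fit_reml = model.fit(reml=True)   # restricted ML: less biased variance components in small samples
fit_ml = model.fit(reml=False)    # full ML: the default in most SEM/LGM software
print(fit_reml.cov_re, fit_ml.cov_re)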
Peer reviewed
Direct link
Broatch, Jennifer; Lohr, Sharon – Journal of Educational and Behavioral Statistics, 2012
Measuring teacher effectiveness is challenging since no direct estimate exists; teacher effectiveness can be measured only indirectly through student responses. Traditional value-added assessment (VAA) models generally attempt to estimate the value that an individual teacher adds to students' knowledge as measured by scores on successive…
Descriptors: Teacher Effectiveness, Models, Maximum Likelihood Statistics, Computation
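The abstract breaks off before the model specification; purely as orientation (assumed notation, not the authors' specification), a simple covariate-adjustment value-added model has the form

y_{it} = \beta_0 + \beta_1\, y_{i,t-1} + \theta_{j(i,t)} + \varepsilon_{it}

where y_{it} is the test score of student i in year t, y_{i,t-1} is the prior-year score, and \theta_{j(i,t)} is the effect of the teacher assigned to that student in year t, estimated only indirectly through the students' scores.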
Peer reviewed
Direct link
Sterba, Sonya K.; Pek, Jolynn – Psychological Methods, 2012
Researchers in psychology are increasingly using model selection strategies to decide among competing models, rather than evaluating the fit of a given model in isolation. However, such interest in model selection outpaces an awareness that one or a few cases can have disproportionate impact on the model ranking. Though case influence on the fit…
Descriptors: Psychological Studies, Models, Selection, Statistical Analysis
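A minimal sketch of the kind of case-influence check the abstract points to: refit two competing models with one case deleted at a time and watch whether the information-criterion ranking flips. The linear-versus-quadratic example, the use of statsmodels OLS, and AIC as the selection criterion are all assumptions made here for illustration.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.normal(size=40)
y = 1.0 + 0.8 * x + 0.2 * x ** 2 + rng.normal(scale=1.0, size=40)

def aic_pair(xv, yv):
    # AIC of a linear and a quadratic model fit to the same cases.
    X1 = sm.add_constant(xv)
    X2 = sm.add_constant(np.column_stack([xv, xv ** 2]))
    return sm.OLS(yv, X1).fit().aic, sm.OLS(yv, X2).fit().aic

aic_lin, aic_quad = aic_pair(x, y)
preferred_full = "linear" if aic_lin < aic_quad else "quadratic"

# Leave-one-out refits: does deleting a single case change the ranking?
for i in range(len(y)):
    keep = np.arange(len(y)) != i
    a1, a2 = aic_pair(x[keep], y[keep])
    preferred = "linear" if a1 < a2 else "quadratic"
    if preferred != preferred_full:
        print(f"dropping case {i} flips the selected model to {preferred}")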
Peer reviewed
Direct link
Jiao, Hong; Wang, Shudong; He, Wei – Journal of Educational Measurement, 2013
This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with those from the marginalized maximum likelihood estimation (MMLE)…
Descriptors: Computation, Item Response Theory, Models, Monte Carlo Methods
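For context, the Rasch testlet model referred to in the abstract is commonly written as (notation assumed here):

P(X_{ij} = 1 \mid \theta_j, \gamma_{j d(i)}) = \frac{\exp\left(\theta_j - b_i + \gamma_{j d(i)}\right)}{1 + \exp\left(\theta_j - b_i + \gamma_{j d(i)}\right)}

where \theta_j is the ability of person j, b_i is the difficulty of item i, and \gamma_{j d(i)} is a person-specific random effect for the testlet d(i) that contains item i. The equivalence examined in the article maps this testlet effect onto the third level of a one-parameter multilevel formulation.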
Peer reviewed
Direct link
Jeon, Minjeong; Rabe-Hesketh, Sophia – Journal of Educational and Behavioral Statistics, 2012
In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…
Descriptors: Maximum Likelihood Statistics, Computation, Models, Factor Structure
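A rough sketch of the general idea: hold a subset of parameters fixed at trial values so that what remains is a standard model, fit that standard model, and then profile over the fixed values. The normal-distribution example and every name below are hypothetical, not the authors' application.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
data = rng.normal(loc=1.5, scale=2.0, size=50)

def profile_loglik(sigma, x):
    # With sigma held fixed at a known constant, the remaining model is
    # standard: the ML estimate of mu is simply the sample mean.
    mu_hat = x.mean()
    return stats.norm.logpdf(x, loc=mu_hat, scale=sigma).sum()

# Profile over a grid of fixed sigma values and take the maximizer.
grid = np.linspace(0.5, 4.0, 200)
profile = np.array([profile_loglik(s, data) for s in grid])
print(round(grid[np.argmax(profile)], 2))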
Peer reviewed
Direct link
Natesan, Prathiba; Limbers, Christine; Varni, James W. – Educational and Psychological Measurement, 2010
The present study presents the formulation of graded response models in the multilevel framework (as nonlinear mixed models) and demonstrates their use in estimating item parameters and investigating the group-level effects for specific covariates using Bayesian estimation. The graded response multilevel model (GRMM) combines the formulation of…
Descriptors: Bayesian Statistics, Computation, Psychometrics, Item Response Theory
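As background only (not quoted from the article), the graded response model being embedded in the multilevel framework is usually written through cumulative category probabilities:

P(Y_{ij} \ge k \mid \theta_j) = \frac{\exp\left(a_i(\theta_j - b_{ik})\right)}{1 + \exp\left(a_i(\theta_j - b_{ik})\right)}, \qquad P(Y_{ij} = k \mid \theta_j) = P(Y_{ij} \ge k \mid \theta_j) - P(Y_{ij} \ge k + 1 \mid \theta_j)

where a_i is the discrimination of item i and b_{ik} is the threshold between categories k - 1 and k. In the nonlinear mixed model formulation, \theta_j becomes a random effect that can itself be regressed on group-level covariates.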
Peer reviewed
de Leeuw, Jan; Kreft, Ita G. G. – Journal of Educational and Behavioral Statistics, 1995
Practical problems with multilevel techniques are discussed. These problems relate to terminology, computer programs employing different algorithms, and interpretations of the coefficients in either one or two steps. The usefulness of hierarchical linear models (HLMs) in common situations in educational research is explored. While elegant, HLMs…
Descriptors: Algorithms, Computer Software, Definitions, Educational Research
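The one-step-versus-two-step contrast in this entry is easy to miss; a hedged sketch of the two approaches, assuming statsmodels and simulated two-level data (school-level covariate w, student-level covariate x), neither of which comes from the article:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy two-level data: 25 students in each of 20 schools.
rng = np.random.default_rng(3)
rows = []
for s in range(20):
    w = rng.normal()
    x = rng.normal(size=25)
    y = 1.0 + 0.5 * w + 0.8 * x + rng.normal(size=25)
    rows.append(pd.DataFrame({"school": s, "w": w, "x": x, "y": y}))
df = pd.concat(rows, ignore_index=True)

# Two-step ("slopes as outcomes"): per-school OLS, then regress the
# estimated intercepts on the school-level covariate.
step1 = df.groupby("school").apply(
    lambda g: smf.ols("y ~ x", data=g).fit().params
).rename(columns={"Intercept": "b0", "x": "b1"})
step1["w"] = df.groupby("school")["w"].first()
step2 = smf.ols("b0 ~ w", data=step1).fit()

# One-step: a single hierarchical (mixed-effects) fit of the same structure.
one_step = smf.mixedlm("y ~ x + w", df, groups=df["school"]).fit()
print(step2.params["w"], one_step.params["w"])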