Showing 181 to 195 of 331 results
Peer reviewed
DeMars, Christine E. – Applied Psychological Measurement, 2012
A testlet is a cluster of items that share a common passage, scenario, or other context. These items might measure something in common beyond the trait measured by the test as a whole; if so, the model for the item responses should allow for this testlet trait. But modeling testlet effects that are negligible makes the model unnecessarily…
Descriptors: Test Items, Item Response Theory, Comparative Analysis, Models
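As a point of reference for the testlet models DeMars (2012) discusses, here is a minimal simulation sketch of a 2PL item response model with an added testlet-specific random effect. All parameter values are made up, and the model form is a generic testlet model rather than the specific parameterizations compared in the article.

```python
# Minimal simulation sketch of a testlet response model (a 2PL with an added
# testlet-specific random effect); parameter values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

n_persons, n_items, items_per_testlet = 1000, 12, 4
n_testlets = n_items // items_per_testlet
testlet_of_item = np.repeat(np.arange(n_testlets), items_per_testlet)

a = rng.uniform(0.8, 1.6, n_items)           # discriminations
b = rng.normal(0.0, 1.0, n_items)            # difficulties
sigma_testlet = 0.7                          # testlet-effect SD (0 => no testlet effect)

theta = rng.normal(0.0, 1.0, n_persons)                          # general trait
gamma = rng.normal(0.0, sigma_testlet, (n_persons, n_testlets))  # testlet traits

# P(correct) = logistic(a_j * (theta_i + gamma_{i,t(j)} - b_j))
logit = a * (theta[:, None] + gamma[:, testlet_of_item] - b)
p = 1.0 / (1.0 + np.exp(-logit))
responses = (rng.uniform(size=p.shape) < p).astype(int)

# Items in the same testlet end up more strongly correlated than items across testlets.
r = np.corrcoef(responses, rowvar=False)
same = testlet_of_item[:, None] == testlet_of_item[None, :]
off_diag = ~np.eye(n_items, dtype=bool)
print("mean within-testlet r :", round(float(r[same & off_diag].mean()), 3))
print("mean between-testlet r:", round(float(r[~same].mean()), 3))
```

Setting sigma_testlet to zero collapses the sketch back to an ordinary 2PL, which mirrors the article's question of when the extra testlet dimension is worth modeling.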
Xu, Yanbo; Mostow, Jack – International Educational Data Mining Society, 2012
A long-standing challenge for knowledge tracing is how to update estimates of multiple subskills that underlie a single observable step. We characterize approaches to this problem by how they model knowledge tracing, fit its parameters, predict performance, and update subskill estimates. Previous methods allocated blame or credit among subskills…
Descriptors: Teaching Methods, Comparative Analysis, Prediction, Mathematics
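To make the multiple-subskill problem in Xu and Mostow (2012) concrete, here is a sketch of a standard Bayesian Knowledge Tracing update extended with one naive credit/blame rule: a step is correct only if every underlying subskill is applied correctly. This is an illustration under that assumption, not the method proposed in the paper, and all parameter values are hypothetical.

```python
# Sketch of Bayesian Knowledge Tracing (BKT) with a naive multiple-subskill
# update: a step succeeds only if every underlying subskill is applied
# correctly. Parameters and the credit/blame rule are illustrative.
import numpy as np

p_learn, p_guess, p_slip = 0.15, 0.2, 0.1   # shared BKT parameters (hypothetical)

def update_subskills(p_know, correct):
    """One BKT-style update of P(known) for each subskill after one observed step."""
    p_know = np.asarray(p_know, dtype=float)
    posterior = np.empty(len(p_know))
    for k in range(len(p_know)):
        # Condition on the other subskills by averaging over their known/unknown states.
        others = np.delete(p_know, k)
        p_others_ok = np.prod(others * (1 - p_slip) + (1 - others) * p_guess)
        p_obs_if_known = (1 - p_slip) * p_others_ok
        p_obs_if_unknown = p_guess * p_others_ok
        if not correct:
            p_obs_if_known, p_obs_if_unknown = 1 - p_obs_if_known, 1 - p_obs_if_unknown
        num = p_know[k] * p_obs_if_known
        posterior[k] = num / (num + (1 - p_know[k]) * p_obs_if_unknown)
    # Learning transition after the step.
    return posterior + (1 - posterior) * p_learn

p = [0.4, 0.7]                              # prior P(known) for two subskills
print(update_subskills(p, correct=True))    # both estimates rise
print(update_subskills(p, correct=False))   # both drop, then receive the learning bump
```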
Peer reviewed
Zhu, Xiaowen; Stone, Clement A. – Educational and Psychological Measurement, 2012
This study examined the relative effectiveness of Bayesian model comparison methods in selecting an appropriate graded response (GR) model for performance assessment applications. Three popular methods were considered: deviance information criterion (DIC), conditional predictive ordinate (CPO), and posterior predictive model checking (PPMC). Using…
Descriptors: Bayesian Statistics, Item Response Theory, Comparative Analysis, Models
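Of the three criteria Zhu and Stone (2012) compare, DIC is the simplest to compute from MCMC output. Below is a sketch of one common DIC definition on a toy normal-mean model; it is only meant to show the mechanics, not the graded response models in the study.

```python
# Sketch of one common DIC computation from MCMC output:
# DIC = D(theta_bar) + 2 * p_D, with p_D = mean(D(theta)) - D(theta_bar),
# illustrated on a toy normal-mean model with known variance.
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(1.0, 1.0, size=50)            # toy data
sigma = 1.0

# Conjugate posterior for the mean under a flat prior: N(ybar, sigma^2 / n)
post_draws = rng.normal(y.mean(), sigma / np.sqrt(len(y)), size=5000)

def deviance(mu):
    loglik = -0.5 * np.log(2 * np.pi * sigma**2) - 0.5 * ((y - mu) / sigma) ** 2
    return -2.0 * loglik.sum()

d_draws = np.array([deviance(mu) for mu in post_draws])
d_bar = d_draws.mean()                 # posterior mean deviance
d_at_mean = deviance(post_draws.mean())
p_d = d_bar - d_at_mean                # effective number of parameters
dic = d_at_mean + 2 * p_d              # equivalently d_bar + p_d
print(f"p_D = {p_d:.2f}, DIC = {dic:.2f}")
```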
Peer reviewed
Stone, Clement A.; Tang, Yun – Practical Assessment, Research & Evaluation, 2013
Propensity score applications are often used to evaluate educational program impact. However, various options are available both to estimate propensity scores and to construct comparison groups. This study used a student achievement dataset with commonly available covariates to compare different propensity score estimation methods (logistic…
Descriptors: Comparative Analysis, Probability, Sample Size, Program Evaluation
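As a baseline for the estimation and matching options Stone and Tang (2013) compare, here is a minimal sketch of logistic-regression propensity scores followed by greedy 1:1 nearest-neighbor matching. The data are simulated, the covariates are hypothetical, and scikit-learn is assumed to be available.

```python
# Minimal sketch: logistic-regression propensity scores plus greedy 1:1
# nearest-neighbor matching on the estimated score. Data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 2000
X = rng.normal(size=(n, 3))                                  # hypothetical covariates
treat = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1]))))
y = 2.0 + 1.0 * X[:, 0] + 0.5 * treat + rng.normal(size=n)   # true effect = 0.5

ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]

# Match each treated unit to the nearest control on the propensity score,
# without replacement.
treated = np.where(treat == 1)[0]
controls = list(np.where(treat == 0)[0])
pairs = []
for i in treated:
    j = min(controls, key=lambda c: abs(ps[c] - ps[i]))
    pairs.append((i, j))
    controls.remove(j)

att = np.mean([y[i] - y[j] for i, j in pairs])
naive = y[treat == 1].mean() - y[treat == 0].mean()
print(f"naive difference: {naive:.3f}, matched estimate: {att:.3f} (truth: 0.5)")
```

Swapping the estimation step (e.g., for a tree-based classifier) or the grouping step (e.g., for stratification or weighting) reproduces the kind of design choices the study evaluates.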
Peer reviewed
Yang, Ji Seung; Hansen, Mark; Cai, Li – Educational and Psychological Measurement, 2012
Traditional estimators of item response theory scale scores ignore uncertainty carried over from the item calibration process, which can lead to incorrect estimates of the standard errors of measurement (SEMs). Here, the authors review a variety of approaches that have been applied to this problem and compare them on the basis of their statistical…
Descriptors: Item Response Theory, Scores, Statistical Analysis, Comparative Analysis
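The sketch below shows EAP scale-score computation for a 2PL by quadrature, and one generic way to fold item-calibration uncertainty into the SEM by re-scoring under draws of the item parameters. It illustrates the general strategy Yang, Hansen, and Cai (2012) discuss rather than their specific estimators; the item parameters and their assumed uncertainty are invented.

```python
# Sketch: EAP scoring for a 2PL by quadrature, then a crude way to reflect
# item-calibration uncertainty by re-scoring under draws of the item
# parameters. Item parameters and their uncertainty are hypothetical.
import numpy as np

grid = np.linspace(-4, 4, 81)
prior = np.exp(-0.5 * grid**2)
prior /= prior.sum()

a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])    # discriminations (point estimates)
b = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])  # difficulties (point estimates)
x = np.array([1, 1, 1, 0, 0])              # one examinee's responses

def eap(a, b, x):
    p = 1 / (1 + np.exp(-a * (grid[:, None] - b)))       # P(correct) on the grid
    like = np.prod(np.where(x == 1, p, 1 - p), axis=1)
    post = like * prior
    post /= post.sum()
    mean = (grid * post).sum()
    sd = np.sqrt(((grid - mean) ** 2 * post).sum())
    return mean, sd

theta_hat, sem = eap(a, b, x)
print(f"EAP = {theta_hat:.3f}, SEM (calibration ignored) = {sem:.3f}")

# Re-score under draws of the item parameters and pool within/between variance.
rng = np.random.default_rng(3)
draws = [eap(a + rng.normal(0, 0.1, a.size), b + rng.normal(0, 0.1, b.size), x)
         for _ in range(200)]
means = np.array([m for m, _ in draws])
sds = np.array([s for _, s in draws])
sem_total = np.sqrt((sds**2).mean() + means.var())
print(f"SEM with calibration uncertainty folded in: {sem_total:.3f}")
```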
Peer reviewed
de Rooij, Mark; Schouteden, Martijn – Multivariate Behavioral Research, 2012
Maximum likelihood estimation of mixed effect baseline category logit models for multinomial longitudinal data can be prohibitive due to the dimensionality of the integral over the random effects distribution. We propose to use multidimensional unfolding methodology to reduce the dimensionality of the problem. As a by-product, readily interpretable graphical…
Descriptors: Statistical Analysis, Longitudinal Studies, Data, Models
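To show where the troublesome integral in de Rooij and Schouteden (2012) comes from, here is a sketch of a mixed-effects baseline-category logit for one subject, with the random effects integrated out by brute-force Monte Carlo. The coefficients and data are invented, and this is the standard model setup, not the unfolding reduction the article proposes.

```python
# Sketch of a mixed-effects baseline-category logit: category probabilities for
# one subject, and a Monte Carlo approximation of the marginal likelihood that
# makes the integral over the random effects explicit. Values are illustrative.
import numpy as np

rng = np.random.default_rng(4)
K = 3                                   # categories; category 0 is the baseline
beta0 = np.array([0.0, 0.4, -0.2])      # intercepts (baseline fixed at 0)
beta1 = np.array([0.0, 0.8, -0.5])      # slopes for one covariate
sigma_u = 1.0                           # SD of the subject random effects

def category_probs(x, u):
    # u has a 0 in the baseline slot; logits are relative to category 0.
    eta = beta0 + beta1 * x + u
    expo = np.exp(eta - eta.max())
    return expo / expo.sum()

def marginal_likelihood(x_t, y_t, n_mc=4000):
    """P(y_1..y_T | x) for one subject, integrating out the random effects by Monte Carlo."""
    u_draws = np.concatenate([np.zeros((n_mc, 1)),
                              rng.normal(0, sigma_u, (n_mc, K - 1))], axis=1)
    lik = np.ones(n_mc)
    for x, y in zip(x_t, y_t):
        probs = np.array([category_probs(x, u) for u in u_draws])
        lik *= probs[:, y]
    return lik.mean()

x_t = [0.0, 0.5, 1.0, 1.5]              # covariate over time for one subject
y_t = [0, 1, 1, 2]                      # observed categories
print(f"marginal likelihood of this response sequence: {marginal_likelihood(x_t, y_t):.4f}")
```

With K categories and K-1 random effects per subject (or more), this integral grows quickly, which is the computational burden the unfolding approach is meant to reduce.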
Peer reviewed
Kaplan, David; Chen, Jianshen – Psychometrika, 2012
A two-step Bayesian propensity score approach is introduced that incorporates prior information in the propensity score equation and outcome equation without the problems associated with simultaneous Bayesian propensity score approaches. The corresponding variance estimators are also provided. The two-step Bayesian propensity score is provided for…
Descriptors: Intervals, Bayesian Statistics, Scores, Prior Learning
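Here is a rough sketch of the two-step structure Kaplan and Chen (2012) work with: posterior-style draws of the propensity-score coefficients (approximated here by a normal distribution around the logistic MLE rather than full MCMC), then an outcome regression per draw with the variance pooled across draws. It is not the authors' exact estimator; statsmodels is assumed available and the data are simulated.

```python
# Rough sketch of a two-step Bayesian propensity score analysis:
# step 1 draws propensity-score coefficients (normal approximation to the
# posterior of a logistic regression); step 2 fits the outcome model per draw
# and pools the variance. Illustration only, not the estimator in the paper.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 1500
x = rng.normal(size=(n, 2))
z = rng.binomial(1, 1 / (1 + np.exp(-(0.7 * x[:, 0] - 0.4 * x[:, 1]))))  # treatment
y = 1.0 + 0.6 * z + 0.9 * x[:, 0] + rng.normal(size=n)                   # true effect = 0.6

# Step 1: approximate posterior of the propensity-score model coefficients.
X_ps = sm.add_constant(x)
ps_fit = sm.Logit(z, X_ps).fit(disp=0)
coef_draws = rng.multivariate_normal(ps_fit.params, ps_fit.cov_params(), size=500)

# Step 2: for each draw, recompute the propensity score and fit the outcome model.
effects, variances = [], []
for beta in coef_draws:
    ps = 1 / (1 + np.exp(-X_ps @ beta))
    X_out = sm.add_constant(np.column_stack([z, ps]))
    out = sm.OLS(y, X_out).fit()
    effects.append(out.params[1])          # coefficient on treatment
    variances.append(out.bse[1] ** 2)

effects, variances = np.array(effects), np.array(variances)
est = effects.mean()
se = np.sqrt(variances.mean() + effects.var())   # within-draw + between-draw variance
print(f"treatment effect: {est:.3f} (SE {se:.3f}), truth 0.6")
```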
Peer reviewed
Gelman, Andrew; Hill, Jennifer; Yajima, Masanao – Journal of Research on Educational Effectiveness, 2012
Applied researchers often find themselves making statistical inferences in settings that would seem to require multiple comparisons adjustments. We challenge the Type I error paradigm that underlies these corrections. Moreover, we posit that the problem of multiple comparisons can disappear entirely when viewed from a hierarchical Bayesian…
Descriptors: Intervals, Comparative Analysis, Inferences, Error Patterns
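The partial-pooling idea behind the Gelman, Hill, and Yajima (2012) argument can be sketched with a simple normal-normal hierarchical model: raw group estimates are shrunk toward a common mean by an amount driven by the estimated between-group variance, which tempers extreme comparisons. The group estimates, standard errors, and the crude empirical-Bayes variance estimate below are all invented.

```python
# Sketch of partial pooling in a normal-normal hierarchical model: shrunken
# group estimates and intervals, with a simple empirical-Bayes
# (method-of-moments) estimate of the between-group variance. Data are made up.
import numpy as np

est = np.array([2.8, -1.0, 0.3, 1.5, -0.4, 4.1, 0.9, -2.2])   # raw group estimates
se  = np.array([1.5,  1.2, 0.9, 1.1,  1.0, 1.8, 0.8,  1.3])   # their standard errors

mu = np.average(est, weights=1 / se**2)                 # pooled mean
tau2 = max(0.0, est.var(ddof=1) - (se**2).mean())       # crude between-group variance

shrink = tau2 / (tau2 + se**2)                          # 0 = full pooling, 1 = no pooling
post_mean = mu + shrink * (est - mu)
post_sd = np.sqrt(shrink * se**2)                       # conditional on mu and tau2

for raw, m, s in zip(est, post_mean, post_sd):
    print(f"raw {raw:5.1f}  ->  shrunken {m:5.2f} +/- {1.96 * s:.2f}")
```

Because noisy extreme estimates are pulled toward the overall mean, fewer of them cross decision thresholds by chance, which is the hierarchical alternative to explicit multiplicity corrections.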
Wu, Haiyan – ProQuest LLC, 2013
General diagnostic models (GDMs) and Bayesian networks are mathematical frameworks that cover a wide variety of psychometric models. Both extend latent class models, and while GDMs also extend item response theory (IRT) models, Bayesian networks can be parameterized using discretized IRT. The purpose of this study is to examine similarities and…
Descriptors: Comparative Analysis, Bayesian Statistics, Middle School Students, Mathematics
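Both model families in Wu (2013) build on the latent class model, so a minimal common denominator is useful: the probability of a response pattern as a mixture over discrete latent classes. The class and item probabilities below are hypothetical, and this sketch does not implement either a GDM or a full Bayesian network.

```python
# Sketch of the latent class model that both GDMs and Bayesian networks extend:
# the probability of a response pattern is a mixture over discrete latent classes.
import numpy as np

class_probs = np.array([0.6, 0.4])                 # P(class), hypothetical
item_probs = np.array([[0.9, 0.3],                 # P(correct | class) for item 1
                       [0.8, 0.2],                 # item 2
                       [0.7, 0.4]])                # item 3

def per_class_likelihood(x):
    x = np.array(x)[:, None]
    return np.prod(np.where(x == 1, item_probs, 1 - item_probs), axis=0)

def pattern_prob(x):
    """Marginal probability of a response pattern x."""
    return float(class_probs @ per_class_likelihood(x))

def posterior_class(x):
    """Posterior class membership given the pattern x."""
    post = class_probs * per_class_likelihood(x)
    return post / post.sum()

print("P(1,1,0) =", round(pattern_prob([1, 1, 0]), 4))
print("P(class | 1,1,0) =", posterior_class([1, 1, 0]).round(3))
```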
Peer reviewed
Song, Xin-Yuan; Xia, Ye-Mao; Pan, Jun-Hao; Lee, Sik-Yum – Structural Equation Modeling: A Multidisciplinary Journal, 2011
Structural equation models have wide applications. One of the most important issues in analyzing structural equation models is model comparison. This article proposes a Bayesian model comparison statistic, namely the L_ν-measure, for both semiparametric and parametric structural equation models. For illustration purposes, we consider…
Descriptors: Structural Equation Models, Bayesian Statistics, Comparative Analysis, Computation
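For orientation, L-type criteria of the kind Song et al. (2011) build on are posterior predictive: roughly, the sum of predictive variances plus a weighted sum of squared deviations of the predictive means from the observed data. The sketch below computes one common form of such a criterion from posterior predictive draws of a toy normal model; the article's L_ν-measure for semiparametric SEMs differs in detail.

```python
# Sketch of an L-type posterior predictive model-comparison criterion:
# L_nu(y) = sum_i Var(y_rep_i) + nu * sum_i (E[y_rep_i] - y_i)^2,
# evaluated from posterior predictive draws of a toy normal model.
import numpy as np

rng = np.random.default_rng(6)
y = rng.normal(0.5, 1.0, size=40)                  # toy data

def l_measure(y, y_rep, nu=0.5):
    """y_rep: draws of replicated data, shape (n_draws, n_obs)."""
    return y_rep.var(axis=0).sum() + nu * ((y_rep.mean(axis=0) - y) ** 2).sum()

def y_rep_estimated_mean(y, n_draws=4000):
    # Posterior predictive draws for a normal-mean model with known variance 1.
    mu_draws = rng.normal(y.mean(), 1 / np.sqrt(len(y)), size=n_draws)
    return rng.normal(mu_draws[:, None], 1.0, size=(n_draws, len(y)))

def y_rep_zero_mean(y, n_draws=4000):
    # Competing model that fixes the mean at zero.
    return rng.normal(0.0, 1.0, size=(n_draws, len(y)))

print("L_nu, estimated-mean model:", round(float(l_measure(y, y_rep_estimated_mean(y))), 1))
print("L_nu, zero-mean model     :", round(float(l_measure(y, y_rep_zero_mean(y))), 1))
# Smaller values indicate better predictive performance.
```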
Peer reviewed
Lubke, Gitta – Measurement: Interdisciplinary Research and Perspectives, 2012
Von Davier et al. (this issue) describe two analyses that aim at determining whether the constructs measured with a number of observed items are categorical or continuous in nature. The issue of types versus traits has a long history and is relevant in many areas of behavioral research, including personality research, as emphasized by von Davier…
Descriptors: Models, Classification, Multivariate Analysis, Statistical Analysis
Peer reviewed
Griffiths, Thomas L.; Chater, Nick; Norris, Dennis; Pouget, Alexandre – Psychological Bulletin, 2012
Bowers and Davis (2012) criticize Bayesian modelers for telling "just so" stories about cognition and neuroscience. Their criticisms are weakened by not giving an accurate characterization of the motivation behind Bayesian modeling or the ways in which Bayesian models are used and by not evaluating this theoretical framework against specific…
Descriptors: Bayesian Statistics, Psychology, Brain, Models
Peer reviewed
Rindskopf, David – Psychological Methods, 2012
Muthén and Asparouhov (2012) made a strong case for the advantages of Bayesian methodology in factor analysis and structural equation models. I present additional extensions and adaptations of their methods and show how non-Bayesians can obtain many (though not all) of these advantages by using interval restrictions on parameters. By…
Descriptors: Structural Equation Models, Bayesian Statistics, Factor Analysis, Computation
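A toy illustration of the general idea in Rindskopf (2012): an interval restriction acts as a non-Bayesian stand-in for an informative prior by confining a maximum likelihood estimate to a plausible range. The example below estimates a simple proportion and is unrelated to the factor-analysis models in the commentary; the bounds are arbitrary.

```python
# Sketch of interval restrictions as a frequentist stand-in for an informative
# prior: maximum likelihood for a proportion with and without a bound on the
# parameter. The data and bounds are illustrative only.
import numpy as np
from scipy.optimize import minimize_scalar

successes, trials = 2, 10              # toy data

def neg_loglik(p):
    return -(successes * np.log(p) + (trials - successes) * np.log(1 - p))

unrestricted = minimize_scalar(neg_loglik, bounds=(1e-6, 1 - 1e-6), method="bounded")
restricted = minimize_scalar(neg_loglik, bounds=(0.25, 0.75), method="bounded")

print(f"unrestricted MLE: {unrestricted.x:.3f}")               # about 0.200
print(f"MLE restricted to [0.25, 0.75]: {restricted.x:.3f}")   # pushed to the boundary, 0.250
```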
Peer reviewed
Society for Research on Educational Effectiveness, 2013
One of the vexing problems in the analysis of SSD is the assessment of intervention effects. Serial dependence notwithstanding, the linear model approach that has been advanced involves, in general, fitting regression lines (or curves) to the observations within each phase of the design and comparing the parameters of these…
Descriptors: Research Design, Effect Size, Intervention, Statistical Analysis
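The within-phase regression approach described in this abstract can be sketched on a simple AB design: fit a line to the baseline observations and another to the intervention observations, then compare level and slope. The data below are simulated, and serial dependence is deliberately ignored in this sketch.

```python
# Sketch of the within-phase regression approach for a single-subject AB design:
# fit a line to the baseline (A) and intervention (B) observations separately
# and compare level and slope. Data are simulated; serial dependence is ignored.
import numpy as np

rng = np.random.default_rng(7)
t_a = np.arange(10)                                    # baseline sessions
t_b = np.arange(10, 25)                                # intervention sessions
y_a = 20 + 0.1 * t_a + rng.normal(0, 2, t_a.size)
y_b = 26 + 0.5 * (t_b - 10) + rng.normal(0, 2, t_b.size)

slope_a, level_a = np.polyfit(t_a, y_a, 1)
slope_b, level_b = np.polyfit(t_b - 10, y_b, 1)        # recenter so the intercept is the level at phase start

print(f"baseline:     level {level_a:5.2f}, slope {slope_a:5.2f}")
print(f"intervention: level {level_b:5.2f}, slope {slope_b:5.2f}")
print(f"change in level at phase change: {level_b - (level_a + slope_a * 10):5.2f}")
print(f"change in slope:                 {slope_b - slope_a:5.2f}")
```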
Peer reviewed
Ho, Tsung-Han; Dodd, Barbara G. – Applied Measurement in Education, 2012
In this study we compared five item selection procedures using three ability estimation methods in the context of a mixed-format adaptive test based on the generalized partial credit model. The item selection procedures used were maximum posterior weighted information, maximum expected information, maximum posterior weighted Kullback-Leibler…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection
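As a point of reference for the selection procedures Ho and Dodd (2012) compare, here is a sketch of generalized partial credit model (GPCM) category probabilities, item information, and the simplest selection rule: administer the unused item with maximum Fisher information at the current ability estimate. The item pool and parameters are hypothetical, and the sketch ignores exposure control and content balancing.

```python
# Sketch of GPCM category probabilities, item information, and maximum-
# information item selection at the current theta estimate. The item pool is
# hypothetical and the rule shown is the simplest of the procedures compared.
import numpy as np

def gpcm_probs(theta, a, b_steps):
    """Category probabilities for one GPCM item; b_steps are the step difficulties."""
    cum = np.concatenate(([0.0], np.cumsum(a * (theta - np.asarray(b_steps)))))
    expo = np.exp(cum - cum.max())
    return expo / expo.sum()

def gpcm_info(theta, a, b_steps):
    # Item information equals a^2 times the variance of the item score.
    p = gpcm_probs(theta, a, b_steps)
    x = np.arange(len(p))
    return a**2 * ((x**2 * p).sum() - ((x * p).sum()) ** 2)

# A small hypothetical pool of 4-category items (3 step difficulties each).
pool = [
    {"a": 1.2, "b": [-1.0, 0.0, 1.0]},
    {"a": 0.9, "b": [-0.5, 0.5, 1.5]},
    {"a": 1.5, "b": [0.2, 0.9, 1.8]},
    {"a": 1.1, "b": [-1.8, -0.9, 0.1]},
]

theta_hat = 0.4                        # current provisional ability estimate
used = {1}                             # items already administered
info = {i: gpcm_info(theta_hat, it["a"], it["b"])
        for i, it in enumerate(pool) if i not in used}
best = max(info, key=info.get)
print("information at theta =", theta_hat, ":",
      ", ".join(f"item {i}: {v:.3f}" for i, v in info.items()))
print("next item selected:", best)
```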