Publication Date
| Period | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 10 |
| Since 2017 (last 10 years) | 48 |
| Since 2007 (last 20 years) | 163 |
Author
| Author | Count |
| --- | --- |
| Ranger, Jochen | 4 |
| Jeon, Minjeong | 3 |
| Kuhn, Jörg-Tobias | 3 |
| Marcoulides, George A. | 3 |
| Rabe-Hesketh, Sophia | 3 |
| Raykov, Tenko | 3 |
| Rijmen, Frank | 3 |
| de la Torre, Jimmy | 3 |
| von Davier, Matthias | 3 |
| Antal, Tamás | 2 |
| Goodwin, Amanda | 2 |
Audience
| Audience | Count |
| --- | --- |
| Researchers | 5 |
| Practitioners | 2 |
Location
| Location | Count |
| --- | --- |
| Australia | 7 |
| Netherlands | 4 |
| Canada | 3 |
| Italy | 3 |
| Turkey | 3 |
| Austria | 2 |
| China | 2 |
| Denmark | 2 |
| Japan | 2 |
| Malaysia | 2 |
| South Korea | 2 |
Hans-Peter Piepho; Johannes Forkman; Waqas Ahmed Malik – Research Synthesis Methods, 2024
Checking for possible inconsistency between direct and indirect evidence is an important task in network meta-analysis. Recently, an evidence-splitting (ES) model has been proposed that allows separating direct and indirect evidence in a network and hence assessing inconsistency. A salient feature of this model is that the variance for…
Descriptors: Maximum Likelihood Statistics, Evidence, Networks, Meta Analysis
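To make the notion of inconsistency concrete, the sketch below illustrates the classic back-calculation of indirect evidence via the consistency equation; it is a minimal illustration of the general idea, not the authors' ES model, and all estimates and standard errors are hypothetical.

```python
import math

# Hypothetical direct estimates (log odds ratios) and standard errors.
d_ab, se_ab = 0.40, 0.12   # A vs B, direct
d_bc, se_bc = 0.25, 0.15   # B vs C, direct
d_ac, se_ac = 0.80, 0.14   # A vs C, direct

# Indirect A-C estimate via the consistency equation d_AC = d_AB + d_BC.
d_ac_ind = d_ab + d_bc
se_ac_ind = math.sqrt(se_ab**2 + se_bc**2)

# Inconsistency: difference between direct and indirect evidence.
w = d_ac - d_ac_ind
se_w = math.sqrt(se_ac**2 + se_ac_ind**2)
z = w / se_w
print(f"inconsistency w = {w:.3f}, z = {z:.2f}")  # |z| > 1.96 flags inconsistency
```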
Doran, Harold – Journal of Educational and Behavioral Statistics, 2023
This article is concerned with a subset of numerically stable and scalable algorithms useful to support computationally complex psychometric models in the era of machine learning and massive data. The subset selected here is a core set of numerical methods that should be familiar to computational psychometricians and considers whitening transforms…
Descriptors: Scaling, Algorithms, Psychometrics, Computation
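One of the methods the abstract names, the whitening transform, maps correlated variables to uncorrelated ones with unit variance. A minimal NumPy sketch of the idea (an illustration, not Doran's implementation), using the eigendecomposition of the sample covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.multivariate_normal([0, 0], [[2.0, 0.8], [0.8, 1.0]], size=5000)

# ZCA-style whitening: W = Sigma^(-1/2) via the eigendecomposition of Sigma.
Sigma = np.cov(X, rowvar=False)
vals, vecs = np.linalg.eigh(Sigma)
W = vecs @ np.diag(vals**-0.5) @ vecs.T

Z = (X - X.mean(axis=0)) @ W.T
print(np.round(np.cov(Z, rowvar=False), 3))  # approximately the identity matrix
```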
Sideridis, Georgios D.; Jaffari, Fathima – Measurement and Evaluation in Counseling and Development, 2022
The utility of the maximum likelihood F-test was demonstrated as an alternative to the omnibus Chi-square test when evaluating model fit in confirmatory factor analysis with small samples, as it has been well documented that the likelihood ratio test statistic (T_ML) is not Chi-square distributed in small samples.
Descriptors: Maximum Likelihood Statistics, Factor Analysis, Alternative Assessment, Sample Size
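For context, the statistic at issue is conventionally computed as T_ML = (N − 1) · F̂_ML, where F̂_ML is the minimized ML fit function, and referred to a chi-square distribution with the model's degrees of freedom. The snippet below (hypothetical numbers, not from the article) shows that conventional chi-square reference:

```python
from scipy import stats

N, df = 60, 24          # hypothetical sample size and model degrees of freedom
F_ml = 0.65             # hypothetical minimized ML fit function value
T_ml = (N - 1) * F_ml   # conventional likelihood ratio test statistic

p_chi2 = stats.chi2.sf(T_ml, df)
print(f"T_ML = {T_ml:.2f}, chi-square p = {p_chi2:.4f}")
# With small N this chi-square reference can be badly calibrated,
# which is what motivates the F-test alternative studied in the article.
```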
Viechtbauer, Wolfgang; López-López, José Antonio – Research Synthesis Methods, 2022
Heterogeneity is commonplace in meta-analysis. When heterogeneity is found, researchers often aim to identify predictors that account for at least part of such heterogeneity by using mixed-effects meta-regression models. Another potentially relevant goal is to focus on the amount of heterogeneity as a function of one or more predictors, but this…
Descriptors: Meta Analysis, Models, Predictor Variables, Computation
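The model class described here, often called a location-scale meta-regression, can be written as follows (generic notation matching the standard formulation, not necessarily the article's):

```latex
% Location-scale mixed-effects meta-regression:
% x_i are location predictors, z_i are scale predictors.
y_i = \mathbf{x}_i^{\top}\boldsymbol{\beta} + u_i + e_i,
\qquad e_i \sim N(0, v_i),
\qquad u_i \sim N(0, \tau_i^{2}),
\qquad \log \tau_i^{2} = \mathbf{z}_i^{\top}\boldsymbol{\alpha}
```

The location part models the average effect, while the log-linear scale part lets the amount of heterogeneity itself depend on predictors.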
Mostafa Hosseinzadeh; Ki Lynn Matlock Cole – Educational and Psychological Measurement, 2024
In real-world situations, multidimensional data may appear on large-scale tests or psychological surveys. The purpose of this study was to investigate the effects of the quantity and magnitude of cross-loadings and model specification on item parameter recovery in multidimensional Item Response Theory (MIRT) models, especially when the model was…
Descriptors: Item Response Theory, Models, Maximum Likelihood Statistics, Algorithms
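A compensatory two-dimensional MIRT model of the kind studied here gives the probability of a correct response as a logistic function of a weighted sum of abilities. The sketch below (hypothetical parameters, not the study's design) simulates responses where one item carries a cross-loading in the discrimination matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
n_persons = 1000

# Two correlated latent traits.
theta = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 1.0]], size=n_persons)

# Discrimination matrix: item 3 cross-loads on both dimensions.
a = np.array([[1.2, 0.0],
              [1.0, 0.0],
              [0.8, 0.5],   # cross-loading
              [0.0, 1.1],
              [0.0, 0.9]])
d = np.array([0.2, -0.5, 0.0, 0.3, -0.2])  # intercepts

# Compensatory MIRT: P(correct) = logistic(a' theta + d).
logits = theta @ a.T + d
p = 1.0 / (1.0 + np.exp(-logits))
responses = rng.binomial(1, p)
print(responses.mean(axis=0))  # item proportions correct
```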
Cornelis Potgieter; Xin Qiao; Akihito Kamata; Yusuf Kara – Journal of Educational Measurement, 2024
As part of the effort to develop an improved oral reading fluency (ORF) assessment system, Kara et al. estimated the ORF scores based on a latent variable psychometric model of accuracy and speed for ORF data via a fully Bayesian approach. This study further investigates likelihood-based estimators for the model-derived ORF scores, including…
Descriptors: Oral Reading, Reading Fluency, Scores, Psychometrics
Sun-Joo Cho; Amanda Goodwin; Matthew Naveiras; Paul De Boeck – Journal of Educational Measurement, 2024
Explanatory item response models (EIRMs) have been applied to investigate the effects of person covariates, item covariates, and their interactions in the fields of reading education and psycholinguistics. In practice, it is often assumed that the relationships between the covariates and the logit transformation of item response probability are…
Descriptors: Item Response Theory, Test Items, Models, Maximum Likelihood Statistics
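In its simplest linear form, an EIRM decomposes the logit of a correct response into person and item covariate effects; one common formulation (generic notation, and exactly the linearity assumption the article interrogates) is:

```latex
% A simple explanatory IRT model:
% X_{ik} are item covariates, Z_{pm} are person covariates.
\operatorname{logit} P(Y_{pi} = 1)
  = \theta_p + \sum_{k} \beta_k X_{ik} + \sum_{m} \gamma_m Z_{pm},
\qquad \theta_p \sim N(0, \sigma_{\theta}^{2})
```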
Sun-Joo Cho; Amanda Goodwin; Matthew Naveiras; Jorge Salas – Journal of Educational Measurement, 2024
Despite the growing interest in incorporating response time data into item response models, there has been a lack of research investigating how the effect of speed on the probability of a correct response varies across different groups (e.g., experimental conditions) for various items (i.e., differential response time item analysis). Furthermore,…
Descriptors: Item Response Theory, Reaction Time, Models, Accuracy
Kim, Su-Young; Huh, David; Zhou, Zhengyang; Mun, Eun-Young – International Journal of Behavioral Development, 2020
Latent growth models (LGMs) are an application of structural equation modeling and are frequently used in developmental and clinical research to analyze change over time in longitudinal outcomes. Maximum likelihood (ML), the most common approach for estimating LGMs, can fail to converge or may produce biased estimates in complex LGMs, especially in…
Descriptors: Bayesian Statistics, Maximum Likelihood Statistics, Longitudinal Studies, Models
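A linear LGM of the kind discussed can be written as a random-intercept, random-slope model (standard notation, not specific to this article):

```latex
% Linear latent growth model:
y_{it} = \eta_{0i} + \eta_{1i}\lambda_t + \varepsilon_{it},
\qquad
\begin{pmatrix} \eta_{0i} \\ \eta_{1i} \end{pmatrix}
\sim N\!\left( \begin{pmatrix} \mu_0 \\ \mu_1 \end{pmatrix}, \boldsymbol{\Psi} \right),
\qquad \varepsilon_{it} \sim N(0, \sigma^{2})
```

Here the lambda_t are fixed time scores (e.g., 0, 1, 2, …) and Psi holds the intercept and slope variances and their covariance.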
Ranger, Jochen; Kuhn, Jörg-Tobias; Wolgast, Anett – Journal of Educational Measurement, 2021
Van der Linden's hierarchical model for responses and response times can be used to infer the ability and mental speed of test takers from their responses and response times on an educational test. A standard approach for this is maximum likelihood estimation. In real-world applications, the data of some test takers might be partly…
Descriptors: Models, Reaction Time, Item Response Theory, Tests
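Van der Linden's hierarchical model combines an IRT model for the responses with a lognormal model for the response times; the response-time level is commonly written as (standard notation):

```latex
% Response-time level of van der Linden's hierarchical model:
% beta_i = time intensity of item i, tau_p = speed of person p.
\log T_{pi} = \beta_i - \tau_p + \varepsilon_{pi},
\qquad \varepsilon_{pi} \sim N(0, \sigma_i^{2})
```

At the second level, ability theta_p and speed tau_p are typically assumed to follow a bivariate normal distribution, which is what links the two measurement models.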
Karadavut, Tugba – Applied Measurement in Education, 2021
Mixture IRT models address heterogeneity in a population by extracting latent classes and allowing item parameters to vary between latent classes. Once the latent classes are extracted, they need to be examined further so that they can be characterized. Some approaches have been adopted in the literature for this purpose. These approaches examine either the…
Descriptors: Item Response Theory, Models, Test Items, Maximum Likelihood Statistics
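A mixture IRT model of this kind marginalizes over G latent classes with class-specific item parameters; in the 2PL case (generic notation, not the article's specific parameterization):

```latex
% Mixture 2PL IRT: G latent classes with class-specific item parameters.
P(Y_{pi} = 1)
  = \sum_{g=1}^{G} \pi_g\,
    \frac{\exp\{a_{ig}(\theta_p - b_{ig})\}}{1 + \exp\{a_{ig}(\theta_p - b_{ig})\}},
\qquad \sum_{g=1}^{G} \pi_g = 1
```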
Daniel Kasper; Katrin Schulz-Heidorf; Knut Schwippert – Sociological Methods & Research, 2024
In this article, we extend Liao's test for across-group comparisons of the fixed effects from the generalized linear model to the fixed and random effects of the generalized linear mixed model (GLMM). Taking the Wald statistic as our basis, we developed an asymptotic test statistic for across-group comparisons of these effects. The test can be…
Descriptors: Models, Achievement Tests, Foreign Countries, International Assessment
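The core of a Wald-type across-group comparison is the squared difference between group-specific estimates divided by the sum of their squared standard errors, referred to a chi-square distribution. A minimal sketch for a single coefficient (hypothetical values; the article's GLMM version generalizes this to vectors of fixed and random effects):

```python
from scipy import stats

# Hypothetical group-specific estimates of the same coefficient.
beta_a, se_a = 0.52, 0.08
beta_b, se_b = 0.31, 0.09

# Wald statistic for H0: beta_a == beta_b, chi-square with 1 df.
W = (beta_a - beta_b) ** 2 / (se_a ** 2 + se_b ** 2)
p = stats.chi2.sf(W, df=1)
print(f"W = {W:.3f}, p = {p:.4f}")
```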
Mulder, J.; Raftery, A. E. – Sociological Methods & Research, 2022
The Schwarz or Bayesian information criterion (BIC) is one of the most widely used tools for model comparison in social science research. The BIC, however, is not suitable for evaluating models with order constraints on the parameters of interest. This article explores two extensions of the BIC for evaluating order-constrained models, one where a…
Descriptors: Models, Social Science Research, Programming Languages, Bayesian Statistics
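For reference, the criterion in question is BIC = k ln n − 2 ln L̂, with k free parameters, sample size n, and maximized likelihood L̂; a short illustration with hypothetical fits:

```python
import math

def bic(log_lik: float, k: int, n: int) -> float:
    """Schwarz/Bayesian information criterion: k*ln(n) - 2*ln(L-hat)."""
    return k * math.log(n) - 2.0 * log_lik

# Hypothetical fits of two competing models: the lower BIC is preferred.
print(bic(log_lik=-1234.5, k=6, n=500))
print(bic(log_lik=-1230.1, k=9, n=500))
```

As the article notes, this standard form has no way to reward a model whose order constraints on the parameters happen to hold, which is what the proposed extensions address.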
Cai, Tianji; Xia, Yiwei; Zhou, Yisu – Sociological Methods & Research, 2021
Analysts of discrete data often face the challenge that certain values are inflated. When treated improperly, such a phenomenon can lead to biased estimates and incorrect inferences. This study extends the existing literature on single-value inflated models and develops a general framework to handle variables with more than…
Descriptors: Statistical Distributions, Probability, Statistical Analysis, Statistical Bias
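The general idea of a multiple-value inflated model is a finite mixture that places extra point mass on a set of inflated values v_1, …, v_J on top of a baseline distribution f (generic form, not necessarily the authors' exact parameterization):

```latex
% Multiple-value inflated model: extra point mass on values v_1, ..., v_J
% on top of a baseline distribution f.
P(Y = y) = \sum_{j=1}^{J} \pi_j \,\mathbb{1}\{y = v_j\}
  + \Bigl(1 - \sum_{j=1}^{J} \pi_j\Bigr) f(y \mid \boldsymbol{\theta})
```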
Ippel, Lianne; Magis, David – Educational and Psychological Measurement, 2020
In the dichotomous item response theory (IRT) framework, the asymptotic standard error (ASE) is the most common statistic for evaluating the precision of various ability estimators. Easy-to-use ASE formulas are readily available; however, the accuracy of some of these formulas was recently questioned and new ASE formulas were derived from a general…
Descriptors: Item Response Theory, Error of Measurement, Accuracy, Standards
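For the ML ability estimator under a 2PL model in the logistic metric, the usual ASE is the inverse square root of the test information, I(theta) = sum over items of a_i^2 P_i(theta)(1 − P_i(theta)); a short sketch with hypothetical item parameters:

```python
import numpy as np

a = np.array([1.2, 0.9, 1.5, 0.7])   # hypothetical discriminations
b = np.array([-0.5, 0.0, 0.8, 1.2])  # hypothetical difficulties

def ase_ml(theta: float) -> float:
    """Asymptotic standard error of the ML ability estimate under the 2PL."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    info = np.sum(a**2 * p * (1.0 - p))  # Fisher test information
    return 1.0 / np.sqrt(info)

print(f"ASE at theta = 0: {ase_ml(0.0):.3f}")
```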

