Showing 1 to 15 of 174 results
Peer reviewed
Michael Nagel; Lukas Fischer; Tim Pawlowski; Augustin Kelava – Structural Equation Modeling: A Multidisciplinary Journal, 2024
Bayesian estimations of complex regression models with high-dimensional parameter spaces require advanced priors capable of addressing both sparsity and multicollinearity in the data. The Dirichlet-horseshoe, a new prior distribution that combines and expands on the concepts of the regularized horseshoe and the Dirichlet-Laplace priors, is a…
Descriptors: Bayesian Statistics, Regression (Statistics), Computation, Statistical Distributions
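The Dirichlet-horseshoe itself is not spelled out in this snippet, but the regularized horseshoe it builds on is well documented. Below is a minimal NumPy sketch of prior draws from the regularized horseshoe (Piironen and Vehtari's formulation), illustrating the spike-at-zero, heavy-tailed shrinkage behavior such priors provide; the dimensions and scale values are illustrative assumptions, not settings from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def regularized_horseshoe_draws(p=50, n_draws=10_000, tau0=0.1, c=2.0):
    """Prior draws of regression coefficients under the regularized horseshoe.

    lambda_j ~ HalfCauchy(0, 1)    (local shrinkage)
    tau      ~ HalfCauchy(0, tau0) (global shrinkage)
    beta_j   ~ Normal(0, tau * lambda_tilde_j), where the local scale is
    soft-truncated by the slab scale c.
    """
    lam = np.abs(rng.standard_cauchy((n_draws, p)))         # local scales
    tau = tau0 * np.abs(rng.standard_cauchy((n_draws, 1)))  # global scale
    lam_tilde = np.sqrt(c**2 * lam**2 / (c**2 + tau**2 * lam**2))
    return rng.normal(0.0, tau * lam_tilde)

beta = regularized_horseshoe_draws()
# Most prior mass sits near zero, with occasional large draws (heavy tails).
print("share of |beta| < 0.05:  ", np.mean(np.abs(beta) < 0.05).round(2))
print("99.9th pct of |beta|:    ", np.quantile(np.abs(beta), 0.999).round(2))
```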
Peer reviewed
Robert B. Olsen; Larry L. Orr; Stephen H. Bell; Elizabeth Petraglia; Elena Badillo-Goicoechea; Atsushi Miyaoka; Elizabeth A. Stuart – Journal of Research on Educational Effectiveness, 2024
Multi-site randomized controlled trials (RCTs) provide unbiased estimates of the average impact in the study sample. However, their ability to accurately predict the impact for individual sites outside the study sample, to inform local policy decisions, is largely unknown. To extend prior research on this question, we analyzed six multi-site RCTs…
Descriptors: Accuracy, Predictor Variables, Randomized Controlled Trials, Regression (Statistics)
Peer reviewed
Cornelis Potgieter; Xin Qiao; Akihito Kamata; Yusuf Kara – Grantee Submission, 2024
As part of the effort to develop an improved oral reading fluency (ORF) assessment system, Kara et al. (2020) estimated the ORF scores based on a latent variable psychometric model of accuracy and speed for ORF data via a fully Bayesian approach. This study further investigates likelihood-based estimators for the model-derived ORF scores,…
Descriptors: Oral Reading, Reading Fluency, Scores, Psychometrics
Peer reviewed
Cornelis Potgieter; Xin Qiao; Akihito Kamata; Yusuf Kara – Journal of Educational Measurement, 2024
As part of the effort to develop an improved oral reading fluency (ORF) assessment system, Kara et al. estimated the ORF scores based on a latent variable psychometric model of accuracy and speed for ORF data via a fully Bayesian approach. This study further investigates likelihood-based estimators for the model-derived ORF scores, including…
Descriptors: Oral Reading, Reading Fluency, Scores, Psychometrics
Peer reviewed
Huan Liu; Won-Chan Lee – Journal of Educational Measurement, 2025
This study investigates the estimation of classification consistency and accuracy indices for composite summed scores and theta scores within the SS-MIRT framework, using five popular approaches: the Lee, Rudner, Guo, Bayesian EAP, and Bayesian MCMC methods. The procedures are illustrated through analysis of two real datasets and further…
Descriptors: Classification, Reliability, Accuracy, Item Response Theory
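One of the approaches named above, Rudner's normal-approximation index, can be sketched directly: each examinee's category-membership probabilities come from integrating a normal distribution centered at the estimated theta over the regions defined by the cut scores. The sketch below is a generic illustration with made-up cut scores and standard errors, not the SS-MIRT composite-score setting of the study.

```python
import numpy as np
from scipy.stats import norm

def rudner_ca_cc(theta_hat, se, cuts):
    """Rudner-style classification accuracy (CA) and consistency (CC).

    Category probabilities are obtained by integrating Normal(theta_hat, se)
    over the regions between the cut scores. CA averages the probability of
    the category containing the point estimate; CC averages the probability
    of assigning the same category on two parallel administrations.
    """
    bounds = np.concatenate(([-np.inf], cuts, [np.inf]))
    probs = np.diff(norm.cdf((bounds[None, :] - theta_hat[:, None]) / se[:, None]), axis=1)
    observed_cat = np.searchsorted(cuts, theta_hat)
    ca = probs[np.arange(len(theta_hat)), observed_cat].mean()
    cc = (probs ** 2).sum(axis=1).mean()
    return ca, cc

# toy example: two cut scores, i.e., three performance levels
rng = np.random.default_rng(1)
theta_hat = rng.normal(0, 1, size=500)
se = np.full(500, 0.3)
print(rudner_ca_cc(theta_hat, se, cuts=np.array([-0.5, 0.8])))
```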
Peer reviewed
Jean-Paul Fox – Journal of Educational and Behavioral Statistics, 2025
Popular item response theory (IRT) models are considered complex, mainly because they include a random factor variable (latent variable). This random factor gives rise to the incidental parameter problem: the number of parameters grows as data from new persons are included. IRT models therefore require a specific estimation method…
Descriptors: Sample Size, Item Response Theory, Accuracy, Bayesian Statistics
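A standard way to sidestep the incidental parameter problem described above is marginal maximum likelihood, which integrates the person parameter out of the likelihood so that only item parameters remain. The sketch below does this for a simulated Rasch model using Gauss-Hermite quadrature; the item difficulties, sample size, and quadrature size are illustrative assumptions, not values from the article.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def simulate_rasch(n_persons, difficulties, rng):
    theta = rng.normal(0, 1, size=n_persons)
    p = expit(theta[:, None] - difficulties[None, :])
    return (rng.uniform(size=p.shape) < p).astype(int)

def neg_marginal_loglik(b, y, nodes, weights):
    """Marginal Rasch likelihood: theta is integrated out over N(0, 1) with
    Gauss-Hermite quadrature, so only item difficulties b are estimated."""
    p = expit(nodes[:, None] - b[None, :])                     # (Q, J) per node
    loglik_q = y @ np.log(p).T + (1 - y) @ np.log(1 - p).T     # (N, Q)
    marg = np.log((np.exp(loglik_q) * weights).sum(axis=1))    # integrate over theta
    return -marg.sum()

rng = np.random.default_rng(2)
true_b = np.array([-1.0, -0.3, 0.0, 0.5, 1.2])
y = simulate_rasch(2000, true_b, rng)

# Gauss-Hermite nodes/weights rescaled for a standard-normal prior on theta
x, w = np.polynomial.hermite.hermgauss(31)
nodes, weights = np.sqrt(2) * x, w / np.sqrt(np.pi)

fit = minimize(neg_marginal_loglik, np.zeros(5), args=(y, nodes, weights), method="BFGS")
print("true difficulties:", true_b)
print("MML estimates:    ", fit.x.round(2))
```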
Peer reviewed
Remiro-Azócar, Antonio; Heath, Anna; Baio, Gianluca – Research Synthesis Methods, 2022
Population adjustment methods such as matching-adjusted indirect comparison (MAIC) are increasingly used to compare marginal treatment effects when there are cross-trial differences in effect modifiers and limited patient-level data. MAIC is based on propensity score weighting, which is sensitive to poor covariate overlap and cannot extrapolate…
Descriptors: Patients, Medical Research, Comparative Analysis, Outcomes of Treatment
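The propensity-score-weighting step that MAIC rests on can be sketched with the standard method-of-moments estimator of Signorovitch et al.: weights of the form exp(x'alpha) are solved for so that the weighted covariate means in the individual-patient-data (IPD) trial match the comparator trial's published aggregate means. The covariates and target means below are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def maic_weights(X_ipd, target_means):
    """MAIC weights via the method-of-moments approach: w_i = exp(x_i' alpha),
    with alpha chosen so the weighted IPD covariate means equal the published
    aggregate means. Minimizing sum(exp(Xc @ alpha)) enforces that balance,
    because its gradient is sum(w_i * (x_i - target))."""
    Xc = X_ipd - target_means
    Xc = Xc / Xc.std(axis=0)               # rescale for stability (weights unchanged)
    alpha = minimize(lambda a: np.exp(Xc @ a).sum(),
                     np.zeros(Xc.shape[1]), method="BFGS").x
    w = np.exp(Xc @ alpha)
    return w / w.mean()

# toy example with hypothetical covariates (age, proportion male)
rng = np.random.default_rng(3)
X = np.column_stack([rng.normal(60, 8, 400), rng.binomial(1, 0.45, 400)])
target = np.array([65.0, 0.60])
w = maic_weights(X, target)

print("weighted covariate means:", np.average(X, axis=0, weights=w).round(2))  # ~ [65.0, 0.60]
print("effective sample size:   ", round(w.sum() ** 2 / (w ** 2).sum()))
```

The shrinking effective sample size in the last line is exactly the symptom of poor covariate overlap that the abstract flags.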
McCluskey, Sydne – ProQuest LLC, 2023
Rater comparison analysis is commonly necessary in the social sciences. Conventional approaches to the problem generally focus on calculation of agreement statistics, which provide useful but incomplete information about rater agreement. Importantly, one-number agreement statistics give no indication regarding the nature of disagreements, nor do…
Descriptors: Bayesian Statistics, Structural Equation Models, Interrater Reliability, Beliefs
Peer reviewed
Gonzalez, Oscar – Educational and Psychological Measurement, 2023
When scores are used to make decisions about respondents, it is of interest to estimate classification accuracy (CA), the probability of making a correct decision, and classification consistency (CC), the probability of making the same decision across two parallel administrations of the measure. Model-based estimates of CA and CC computed from the…
Descriptors: Classification, Accuracy, Intervals, Probability
Peer reviewed
A. M. Sadek; Fahad Al-Muhlaki – Measurement: Interdisciplinary Research and Perspectives, 2024
In this study, the accuracy of an artificial neural network (ANN) was assessed while accounting for the uncertainties associated with the randomness of the data and with incomplete learning. The Monte Carlo algorithm was applied to simulate the randomness of the input variables and evaluate the output distribution. It has been shown that under certain…
Descriptors: Monte Carlo Methods, Accuracy, Artificial Intelligence, Guidelines
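The Monte Carlo treatment of input randomness described above amounts to drawing inputs from their assumed distributions, pushing each draw through the trained network, and summarizing the induced output distribution. A minimal sketch, with an arbitrary toy network standing in for the trained ANN of the study:

```python
import numpy as np

rng = np.random.default_rng(4)

# A tiny fixed feed-forward network standing in for a trained ANN
# (weights are arbitrary illustrative values, not fitted to real data).
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def ann(x):
    """One hidden layer with tanh activation."""
    return np.tanh(x @ W1 + b1) @ W2 + b2

def monte_carlo_output(x_mean, x_sd, n_sims=20_000):
    """Draw inputs from their assumed distributions and collect the
    induced distribution of the network output."""
    x = rng.normal(x_mean, x_sd, size=(n_sims, len(x_mean)))
    return ann(x).ravel()

y = monte_carlo_output(x_mean=np.array([0.5, -1.0]), x_sd=np.array([0.1, 0.3]))
print("output mean: ", y.mean().round(3))
print("95% interval:", np.quantile(y, [0.025, 0.975]).round(3))
```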
Peer reviewed
PDF available on ERIC
Kilic, Abdullah Faruk; Dogan, Nuri – International Journal of Assessment Tools in Education, 2021
Weighted least squares (WLS), weighted least squares mean-and-variance-adjusted (WLSMV), unweighted least squares mean-and-variance-adjusted (ULSMV), maximum likelihood (ML), robust maximum likelihood (MLR) and Bayesian estimation methods were compared in mixed item response type data via Monte Carlo simulation. The percentage of polytomous items,…
Descriptors: Factor Analysis, Computation, Least Squares Statistics, Maximum Likelihood Statistics
Peer reviewed
Mangino, Anthony A.; Smith, Kendall A.; Finch, W. Holmes; Hernández-Finch, Maria E. – Measurement and Evaluation in Counseling and Development, 2022
A number of machine learning methods can be employed in the prediction of suicide attempts. However, many models predict new cases poorly when the data are unbalanced. The present study improved prediction of suicide attempts through the use of a generative adversarial network.
Descriptors: Prediction, Suicide, Artificial Intelligence, Networks
Peer reviewed
de Jong, Valentijn M. T.; Campbell, Harlan; Maxwell, Lauren; Jaenisch, Thomas; Gustafson, Paul; Debray, Thomas P. A. – Research Synthesis Methods, 2023
A common problem in the analysis of multiple data sources, including individual participant data meta-analysis (IPD-MA), is the misclassification of binary variables. Misclassification may lead to biased estimators of model parameters, even when the misclassification is entirely random. We aimed to develop statistical methods that facilitate…
Descriptors: Classification, Meta Analysis, Bayesian Statistics, Evaluation Methods
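The core phenomenon, attenuation from entirely random (nondifferential) misclassification, and the simplest correction with known sensitivity and specificity can be illustrated as follows. This is a frequentist matrix-method sketch for a single 2x2 table, not the Bayesian multi-source methods the authors develop; all numbers are simulated.

```python
import numpy as np

rng = np.random.default_rng(5)

def odds_ratio(x, y):
    a = np.sum((x == 1) & (y == 1)); b = np.sum((x == 1) & (y == 0))
    c = np.sum((x == 0) & (y == 1)); d = np.sum((x == 0) & (y == 0))
    return (a * d) / (b * c)

# True binary exposure and outcome with a genuine association (OR = 3)
n = 200_000
x = rng.binomial(1, 0.3, n)
y = rng.binomial(1, 1 / (1 + np.exp(-(-1.5 + np.log(3.0) * x))))

# Entirely random (nondifferential) misclassification of x
sens, spec = 0.85, 0.90
x_obs = np.where(x == 1, rng.binomial(1, sens, n), rng.binomial(1, 1 - spec, n))

print("true OR:     ", round(odds_ratio(x, y), 2))
print("observed OR: ", round(odds_ratio(x_obs, y), 2))   # attenuated toward 1

# Matrix-method correction with known sensitivity/specificity:
# observed cell counts = M @ true cell counts
M = np.array([[sens, 1 - spec], [1 - sens, spec]])
corrected = []
for yy in (1, 0):
    obs = np.array([np.sum((x_obs == 1) & (y == yy)), np.sum((x_obs == 0) & (y == yy))])
    corrected.append(np.linalg.solve(M, obs))
(a, c), (b, d) = corrected
print("corrected OR:", round((a * d) / (b * c), 2))       # close to the true OR again
```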
Huan Liu – ProQuest LLC, 2024
In many large-scale testing programs, examinees are categorized into different performance levels. These classifications are then used to make high-stakes decisions about examinees in contexts such as licensure, certification, and educational assessment. Numerous approaches to estimating the consistency and accuracy of this…
Descriptors: Classification, Accuracy, Item Response Theory, Decision Making
Peer reviewed
Meng Qiu; Ke-Hai Yuan – Structural Equation Modeling: A Multidisciplinary Journal, 2024
Latent class analysis (LCA) is a widely used technique for detecting unobserved population heterogeneity in cross-sectional data. Despite its popularity, the performance of LCA is not well understood. In this study, we evaluate the performance of LCA with binary data by examining classification accuracy, parameter estimation accuracy, and coverage…
Descriptors: Classification, Sample Size, Monte Carlo Methods, Social Science Research
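A minimal version of the setup being evaluated, latent class analysis of binary items fit by EM, together with the kind of classification-accuracy check the study examines, can be sketched as follows. The class structure, item probabilities, and sample size are illustrative, not the simulation conditions of the article.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulate binary responses from a 2-class latent class model
n, J = 1000, 6
true_pi = np.array([0.6, 0.4])                       # class sizes
true_rho = np.array([[0.8] * J, [0.2] * J])          # P(item = 1 | class)
z = rng.choice(2, size=n, p=true_pi)
Y = rng.binomial(1, true_rho[z])

def lca_em(Y, K=2, n_iter=200, seed=0):
    """EM for latent class analysis with binary items.
    E-step: posterior class memberships given current parameters.
    M-step: update class proportions and conditional item probabilities."""
    init_rng = np.random.default_rng(seed)
    n, J = Y.shape
    pi = np.full(K, 1 / K)
    rho = init_rng.uniform(0.3, 0.7, size=(K, J))
    for _ in range(n_iter):
        rho = np.clip(rho, 1e-6, 1 - 1e-6)
        logp = np.log(pi)[None, :] + Y @ np.log(rho).T + (1 - Y) @ np.log(1 - rho).T
        post = np.exp(logp - logp.max(axis=1, keepdims=True))
        post /= post.sum(axis=1, keepdims=True)
        pi = post.mean(axis=0)
        rho = (post.T @ Y) / post.sum(axis=0)[:, None]
    return pi, rho, post

pi_hat, rho_hat, post = lca_em(Y)
# Classification accuracy against the simulated classes (allowing for label switching)
hit = np.mean(post.argmax(axis=1) == z)
print("estimated class sizes:  ", pi_hat.round(2))
print("classification accuracy:", round(max(hit, 1 - hit), 2))
```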