Showing all 4 results
Peer reviewed
PDF on ERIC (full text available)
Joyce M. W. Moonen-van Loon; Jeroen Donkers – Practical Assessment, Research & Evaluation, 2025
The reliability of assessment tools is critical for accurately monitoring student performance in various educational contexts. When multiple assessments are combined to form an overall evaluation, each assessment serves as a data point contributing to the overall picture of the student's performance within a broader educational framework. Determining composite…
Descriptors: Programming Languages, Reliability, Evaluation Methods, Student Evaluation
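The snippet above concerns the reliability of a composite formed from multiple assessments. The entry does not show the authors' method, so as a point of reference only, here is a minimal Python sketch of one classical formula for the reliability of a weighted composite (Mosier, 1943); every weight, standard deviation, reliability, and correlation below is illustrative.

```python
import numpy as np

def composite_reliability(weights, sds, reliabilities, corr):
    """Reliability of a weighted composite of assessments (Mosier, 1943).

    weights       : weight of each assessment in the composite
    sds           : standard deviation of each assessment score
    reliabilities : reliability coefficient of each assessment
    corr          : observed correlation matrix between assessments
    """
    w = np.asarray(weights, dtype=float)
    s = np.asarray(sds, dtype=float)
    r = np.asarray(reliabilities, dtype=float)
    cov = np.asarray(corr, dtype=float) * np.outer(s, s)  # covariance matrix
    var_composite = w @ cov @ w                           # variance of the composite
    error_var = np.sum(w**2 * s**2 * (1.0 - r))           # summed error variance
    return 1.0 - error_var / var_composite

# Toy example: three assessments combined with equal weights.
corr = np.array([[1.0, 0.5, 0.4],
                 [0.5, 1.0, 0.3],
                 [0.4, 0.3, 1.0]])
print(composite_reliability([1, 1, 1], [2.0, 1.5, 1.0], [0.8, 0.7, 0.75], corr))
```

Only the per-assessment error variances enter the numerator because this classical model assumes the measurement errors of distinct assessments are uncorrelated.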
Peer reviewed
PDF on ERIC (full text available)
Tan, Teck Kiang – Practical Assessment, Research & Evaluation, 2023
Researchers who compare group means often have hypotheses about the state of affairs in the population from which they sampled their data. The classical frequentist approach carries out hypothesis testing with ANOVA, stating the null hypothesis that there is no difference in the means and proceeding with multiple comparisons…
Descriptors: Comparative Analysis, Hypothesis Testing, Statistical Analysis, Guidelines
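The abstract contrasts informative hypothesis testing with the classical frequentist workflow it describes: an omnibus ANOVA followed by multiple comparisons. The sketch below illustrates only that frequentist baseline, not the paper's own approach, using SciPy on made-up data with a Bonferroni correction chosen for simplicity.

```python
from itertools import combinations

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical scores for three groups (illustrative data only).
groups = {
    "A": rng.normal(50, 10, 30),
    "B": rng.normal(55, 10, 30),
    "C": rng.normal(60, 10, 30),
}

# Omnibus one-way ANOVA: H0 is that all group means are equal.
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Follow-up pairwise comparisons with a Bonferroni correction.
pairs = list(combinations(groups, 2))
alpha = 0.05 / len(pairs)  # adjusted per-comparison significance level
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: t = {t:.2f}, p = {p:.4f}, significant: {p < alpha}")
```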
Peer reviewed
Direct link
Van Lissa, Caspar J.; van Erp, Sara; Clapper, Eli-Boaz – Research Synthesis Methods, 2023
When meta-analyzing heterogeneous bodies of literature, meta-regression can be used to account for potentially relevant between-studies differences. A key challenge is that the number of candidate moderators is often high relative to the number of studies. This introduces risks of overfitting, spurious results, and model non-convergence. To…
Descriptors: Bayesian Statistics, Regression (Statistics), Maximum Likelihood Statistics, Meta Analysis
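The abstract describes meta-regression with many candidate moderators relative to the number of studies, where regularization guards against overfitting and spurious results. The paper develops a Bayesian regularized approach; the sketch below conveys the same shrinkage idea with a simpler frequentist stand-in, an inverse-variance weighted LASSO, on fabricated illustrative data, so the names and numbers are assumptions rather than the authors' method.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n_studies, n_moderators = 40, 10  # few studies, many candidate moderators

# Hypothetical meta-analytic data: only the first moderator truly matters.
X = rng.normal(size=(n_studies, n_moderators))
true_effect = 0.3 + 0.5 * X[:, 0]
v = rng.uniform(0.01, 0.1, n_studies)        # sampling variance per study
y = true_effect + rng.normal(0, np.sqrt(v))  # observed effect sizes

# Inverse-variance weighted LASSO: the penalty shrinks coefficients of
# irrelevant moderators toward zero, guarding against overfitting.
model = Lasso(alpha=0.05)
model.fit(X, y, sample_weight=1.0 / v)
print("intercept:", round(model.intercept_, 3))
print("moderator coefficients:", np.round(model.coef_, 3))
```

With the penalty in place, most of the ten coefficients land at or near zero while the genuine moderator survives, which is the behavior the abstract motivates.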
Melissa G. Wolf; Daniel McNeish – Grantee Submission, 2023
To evaluate the fit of a confirmatory factor analysis model, researchers often rely on fit indices such as SRMR, RMSEA, and CFI. These indices are frequently compared to benchmark values of 0.08, 0.06, and 0.96, respectively, established by Hu and Bentler (1999). However, these indices are affected by model characteristics and their sensitivity to…
Descriptors: Programming Languages, Cutting Scores, Benchmarking, Factor Analysis
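The abstract cites the fixed Hu and Bentler (1999) benchmarks for SRMR, RMSEA, and CFI. For orientation only, the sketch below computes RMSEA and CFI from standard chi-square summaries and prints them beside the benchmarks quoted in the entry; the fitted-model and baseline-model numbers are invented for illustration, and this is the conventional fixed-cutoff practice the entry questions, not a dynamic-cutoff method. (SRMR is omitted because it requires the full residual correlation matrix.)

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation from a model chi-square."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2_m, df_m, chi2_b, df_b):
    """Comparative fit index: model chi-square relative to the baseline model."""
    num = max(chi2_m - df_m, 0.0)
    den = max(chi2_b - df_b, chi2_m - df_m, 0.0)
    return 1.0 - num / den

# Hypothetical CFA output (illustrative numbers only).
chi2_m, df_m = 85.0, 40    # fitted model
chi2_b, df_b = 900.0, 55   # baseline (independence) model
n = 500                    # sample size

print(f"RMSEA = {rmsea(chi2_m, df_m, n):.3f} (fixed benchmark: <= .06)")
print(f"CFI   = {cfi(chi2_m, df_m, chi2_b, df_b):.3f} (fixed benchmark: >= .96)")
```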