Showing all 8 results
Batley, Prathiba Natesan; Minka, Tom; Hedges, Larry Vernon – Grantee Submission, 2020
Immediacy is one of the necessary criteria to show strong evidence of a treatment effect in single case experimental designs (SCEDs). With the exception of Natesan and Hedges (2017), no inferential statistical tool has been used to demonstrate or quantify it until now. We investigate and quantify immediacy by treating the change-points between the…
Descriptors: Bayesian Statistics, Monte Carlo Methods, Statistical Inference, Markov Processes
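The change-point approach described in this abstract can be illustrated with a minimal sketch (my own simplification, not the authors' model): compute a discrete posterior over the location of a single change-point, assuming normal observations with a known, common standard deviation and flat priors on the two phase means. The data below are hypothetical.

```python
import numpy as np

def changepoint_posterior(y, sigma=1.0):
    """Discrete posterior over the location of one change-point in y,
    assuming Normal(phase mean, sigma) observations; the two phase
    means are profiled out with their sample means."""
    n = len(y)
    log_lik = np.empty(n - 1)
    for tau in range(1, n):  # change occurs between y[tau-1] and y[tau]
        a, b = y[:tau], y[tau:]
        resid = np.concatenate([a - a.mean(), b - b.mean()])
        log_lik[tau - 1] = -0.5 * np.sum(resid**2) / sigma**2
    post = np.exp(log_lik - log_lik.max())
    return post / post.sum()

# Hypothetical series with an abrupt level shift after observation 5
y = np.array([1.0, 0.8, 1.2, 0.9, 1.1, 3.0, 3.2, 2.9, 3.1, 3.0])
post = changepoint_posterior(y)
print(post.argmax() + 1)  # most probable change-point location: 5
```

A posterior that concentrates sharply at the phase boundary is one way to quantify immediacy; a diffuse posterior suggests a gradual rather than immediate change.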
Peer reviewed
Li, Tongyun; Jiao, Hong; Macready, George B. – Educational and Psychological Measurement, 2016
The present study investigates different approaches to adding covariates and the impact in fitting mixture item response theory models. Mixture item response theory models serve as an important methodology for tackling several psychometric issues in test development, including the detection of latent differential item functioning. A Monte Carlo…
Descriptors: Item Response Theory, Psychometrics, Test Construction, Monte Carlo Methods
Peer reviewed
Belland, Brian R.; Walker, Andrew E.; Kim, Nam Ju – Review of Educational Research, 2017
Computer-based scaffolding provides temporary support that enables students to participate in and become more proficient at complex skills like problem solving, argumentation, and evaluation. While meta-analyses have addressed between-subject differences on cognitive outcomes resulting from scaffolding, none has addressed within-subject gains.…
Descriptors: Bayesian Statistics, Meta Analysis, STEM Education, Computer Assisted Instruction
Peer reviewed
Dai, Yunyun – Applied Psychological Measurement, 2013
Mixtures of item response theory (IRT) models have been proposed as a technique to explore response patterns in test data related to cognitive strategies, instructional sensitivity, and differential item functioning (DIF). Estimation proves challenging due to difficulties in identification and questions of effect size needed to recover underlying…
Descriptors: Item Response Theory, Test Bias, Computation, Bayesian Statistics
Peer reviewed
PDF on ERIC
Society for Research on Educational Effectiveness, 2013
One of the vexing problems in the analysis of single-subject designs (SSDs) is the assessment of the effect of intervention. Serial dependence notwithstanding, the linear model approach that has been advanced involves, in general, fitting regression lines (or curves) to the set of observations within each phase of the design and comparing the parameters of these…
Descriptors: Research Design, Effect Size, Intervention, Statistical Analysis
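The phase-regression idea this abstract refers to can be sketched as follows (a simplified illustration on hypothetical data, ignoring the serial dependence the abstract cautions about; this is not the SREE paper's actual model): fit an ordinary least squares line within each phase and compare the estimated parameters.

```python
import numpy as np

def fit_phase(t, y):
    """OLS fit of y = b0 + b1 * t within one phase; returns (b0, b1)."""
    X = np.column_stack([np.ones_like(t), t])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[0], coef[1]

# Hypothetical baseline (A) and intervention (B) observations
t_a, y_a = np.arange(5.0), np.array([2.0, 2.1, 1.9, 2.2, 2.0])
t_b, y_b = np.arange(5.0, 10.0), np.array([3.0, 3.4, 3.9, 4.3, 4.8])

b0_a, b1_a = fit_phase(t_a, y_a)
b0_b, b1_b = fit_phase(t_b, y_b)
print(f"baseline slope {b1_a:.2f}, intervention slope {b1_b:.2f}")
```

Comparing the phase parameters (level and trend) is what quantifies the intervention effect in this framework; in this toy example the intervention phase shows both a higher level and a steeper trend.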
Rindskopf, David; Shadish, William; Hedges, Larry – Society for Research on Educational Effectiveness, 2012
Data from single case designs (SCDs) have traditionally been analyzed by visual inspection rather than statistical models. As a consequence, effect sizes have been of little interest. Lately, some effect-size estimators have been proposed, but most are either (i) nonparametric, and/or (ii) based on an analogy incompatible with effect sizes from…
Descriptors: Intervention, Effect Size, Bayesian Statistics, Research Design
Peer reviewed
Cheung, Shu Fai; Chan, Darius K.-S. – Educational and Psychological Measurement, 2008
In meta-analysis, it is common to have dependent effect sizes, such as several effect sizes from the same sample but measured at different times. Cheung and Chan proposed the adjusted-individual and adjusted-weighted procedures to estimate the degree of dependence and incorporate this estimate in the meta-analysis. The present study extends the…
Descriptors: Effect Size, Academic Achievement, Meta Analysis, Correlation
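One reason such dependence matters can be sketched under strong simplifying assumptions (equal sampling variances and a single assumed correlation r; this is not the adjusted-individual or adjusted-weighted procedure itself): the variance of the mean of k dependent effect sizes inflates by the factor 1 + (k - 1)r relative to what the independence assumption would give.

```python
def var_of_mean(k, v, r):
    """Variance of the mean of k effect sizes that share a common
    sampling variance v and a common pairwise correlation r.
    With r = 0 this reduces to the independent-case value v / k."""
    return (v / k) * (1 + (k - 1) * r)

# Two effect sizes from the same sample, each with variance 0.04,
# under a hypothetical assumed correlation of 0.5:
naive = 0.04 / 2                      # 0.02 if treated as independent
adjusted = var_of_mean(2, 0.04, 0.5)  # 0.03: dependence costs precision
print(naive, adjusted)
```

Ignoring the dependence understates the variance and therefore overstates the precision of the pooled estimate, which is the problem that estimating the degree of dependence, as in the procedures above, is designed to address.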
Peer reviewed
Herzog, Walter; Boomsma, Anne; Reinecke, Sven – Structural Equation Modeling: A Multidisciplinary Journal, 2007
According to Kenny and McCoach (2003), chi-square tests of structural equation models produce inflated Type I error rates as the degrees of freedom increase. So far, the amount of this bias in large models has not been quantified. In a Monte Carlo study of confirmatory factor models with 48 to 960 degrees of freedom, it was found that…
Descriptors: Monte Carlo Methods, Structural Equation Models, Effect Size, Maximum Likelihood Statistics