Timo Gnambs; Ulrich Schroeders – Research Synthesis Methods, 2024
Meta-analyses of treatment effects in randomized controlled trials often face the problem of missing information required to calculate effect sizes and their sampling variances. In particular, correlations between pre- and posttest scores are frequently not available. As an ad hoc solution, researchers impute a constant value for the missing…
Descriptors: Accuracy, Meta Analysis, Randomized Controlled Trials, Effect Size
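The abstract above concerns the pre-post correlation that enters the sampling variance of a pre-post standardized mean difference. A minimal Python sketch, using one common approximation for that variance (the function name and numbers are illustrative, not from the paper), shows why an imputed constant for the missing correlation matters:

```python
def prepost_smd_variance(d, n, r):
    """Approximate sampling variance of a pre-post standardized mean
    difference; the pre-post correlation r enters directly, so any
    imputed value of r shifts the variance (and hence the weights
    the study receives in a meta-analysis)."""
    return (2 * (1 - r) / n) + d**2 / (2 * n)

# Sensitivity of the variance to the imputed correlation
for r in (0.3, 0.5, 0.7, 0.9):
    print(f"r={r:.1f}  var={prepost_smd_variance(d=0.4, n=50, r=r):.4f}")
```

Halving (1 - r) roughly halves the dominant term, so a constant imputed across studies can systematically distort study weights.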
Eric C. Hedberg – Grantee Submission, 2023
In cluster randomized evaluations, a treatment or intervention is randomly assigned to a set of clusters, each with constituent individual units of observation (e.g., student units that attend schools, which are assigned to treatment). One consideration in these designs is how many units are needed per cluster to achieve adequate statistical…
Descriptors: Statistical Analysis, Multivariate Analysis, Randomized Controlled Trials, Research Design
E. C. Hedberg – American Journal of Evaluation, 2023
In cluster randomized evaluations, a treatment or intervention is randomly assigned to a set of clusters, each with constituent individual units of observation (e.g., student units that attend schools, which are assigned to treatment). One consideration in these designs is how many units are needed per cluster to achieve adequate statistical…
Descriptors: Statistical Analysis, Multivariate Analysis, Randomized Controlled Trials, Research Design
Joshua B. Gilbert; Luke W. Miratrix; Mridul Joshi; Benjamin W. Domingue – Journal of Educational and Behavioral Statistics, 2025
Analyzing heterogeneous treatment effects (HTEs) plays a crucial role in understanding the impacts of educational interventions. A standard practice for HTE analysis is to examine interactions between treatment status and preintervention participant characteristics, such as pretest scores, to identify how different groups respond to treatment.…
Descriptors: Causal Models, Item Response Theory, Statistical Inference, Psychometrics
Joshua B. Gilbert; Luke W. Miratrix; Mridul Joshi; Benjamin W. Domingue – Annenberg Institute for School Reform at Brown University, 2024
Analyzing heterogeneous treatment effects (HTE) plays a crucial role in understanding the impacts of educational interventions. A standard practice for HTE analysis is to examine interactions between treatment status and pre-intervention participant characteristics, such as pretest scores, to identify how different groups respond to treatment.…
Descriptors: Causal Models, Item Response Theory, Statistical Inference, Psychometrics
Onslow, Mark; Jones, Mark; O'Brian, Sue; Packman, Ann; Menzies, Ross; Lowe, Robyn; Arnott, Simone; Bridgman, Kate; de Sonneville, Caroline; Franken, Marie-Christine – Journal of Speech, Language, and Hearing Research, 2018
Purpose: This report investigates whether parent-reported stuttering severity ratings (SRs) provide similar estimates of effect size as percentage of syllables stuttered (%SS) for randomized trials of early stuttering treatment with preschool children. Method: Data sets from 3 randomized controlled trials of an early stuttering intervention were…
Descriptors: Stuttering, Severity (of Disability), Effect Size, Preschool Children
Rhoads, Christopher – Society for Research on Educational Effectiveness, 2016
Current practice for conducting power analyses in hierarchical trials using survey-based ICC and effect size estimates may be misestimating power because ICCs are not being adjusted to account for treatment effect heterogeneity. Results presented in Table 1 show that the necessary adjustments can be quite large or quite small. Furthermore, power…
Descriptors: Statistical Analysis, Correlation, Effect Size, Surveys
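For context on the power analyses this abstract critiques, a minimal sketch of the standard minimum detectable effect size (MDES) formula for a balanced two-arm cluster randomized trial, before any heterogeneity adjustment (the multiplier and numbers are illustrative assumptions):

```python
import math

def mdes_cluster(n_clusters, n_per_cluster, icc, multiplier=2.8):
    """MDES for a balanced two-arm cluster randomized trial with no
    covariates; multiplier ~2.8 corresponds to 80% power at alpha=.05
    with large degrees of freedom (a common rule of thumb)."""
    return multiplier * math.sqrt(
        (4 / n_clusters) * (icc + (1 - icc) / n_per_cluster)
    )

print(f"MDES = {mdes_cluster(40, 20, 0.2):.3f}")
```

Because the ICC sits inside the square root, even modest mis-specification of it (e.g., unadjusted survey-based estimates) moves the MDES noticeably.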
Harrison, Sean; Jones, Hayley E.; Martin, Richard M.; Lewis, Sarah J.; Higgins, Julian P. T. – Research Synthesis Methods, 2017
Meta-analyses combine the results of multiple studies of a common question. Approaches based on effect size estimates from each study are generally regarded as the most informative. However, these methods can only be used if comparable effect sizes can be computed from each study, and this may not be the case due to variation in how the studies…
Descriptors: Meta Analysis, Sample Size, Effect Size, Comparative Analysis
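The Harrison et al. entry concerns combining comparable effect sizes across studies. The baseline method it presupposes, inverse-variance weighting, can be sketched in a few lines (illustrative data, not from the paper):

```python
def fixed_effect_meta(effects, variances):
    """Fixed-effect inverse-variance meta-analysis: each study is
    weighted by 1/variance, so precise studies dominate the pool."""
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_var = 1 / sum(weights)
    return pooled, pooled_var

est, var = fixed_effect_meta([0.2, 0.5, 0.35], [0.04, 0.09, 0.02])
print(f"pooled d = {est:.3f}, SE = {var ** 0.5:.3f}")
```

This is exactly why non-comparable effect sizes are a problem: a study that cannot supply an effect estimate and variance on the common scale cannot enter the weighted sum at all.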
Kelcey, Ben; Spybrook, Jessaca; Phelps, Geoffrey; Jones, Nathan; Zhang, Jiaqi – Journal of Experimental Education, 2017
We develop a theoretical and empirical basis for the design of teacher professional development studies. We build on previous work by (a) developing estimates of intraclass correlation coefficients for teacher outcomes using two- and three-level data structures, (b) developing estimates of the variance explained by covariates, and (c) modifying…
Descriptors: Faculty Development, Research Design, Teacher Effectiveness, Correlation
Byun, Tara McAllister – Journal of Speech, Language, and Hearing Research, 2017
Purpose: This study documented the efficacy of visual-acoustic biofeedback intervention for residual rhotic errors, relative to a comparison condition involving traditional articulatory treatment. All participants received both treatments in a single-subject experimental design featuring alternating treatments with blocked randomization of…
Descriptors: Biofeedback, Intervention, Randomized Controlled Trials, Comparative Analysis
Dong, Nianbo; Reinke, Wendy M.; Herman, Keith C.; Bradshaw, Catherine P.; Murray, Desiree W. – Society for Research on Educational Effectiveness, 2015
Cluster randomized experiments are now widely used to examine intervention effects in prevention science. It is meaningful to use empirical benchmarks for interpreting effect size in prevention science. The effect size (i.e., the standardized mean difference, calculated by the difference of the means between the treatment and control groups,…
Descriptors: Effect Size, Correlation, Multivariate Analysis, Statistical Analysis
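The Dong et al. abstract defines the effect size as the standardized mean difference between treatment and control groups; the truncation presumably continues "divided by the pooled standard deviation." A minimal sketch of that definition (illustrative numbers):

```python
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference: (treatment mean - control mean)
    divided by the pooled within-group standard deviation."""
    pooled_sd = math.sqrt(
        ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
    )
    return (mean_t - mean_c) / pooled_sd

print(cohens_d(105, 100, 15, 15, 60, 60))  # 5 / 15 ≈ 0.333
```

Empirical benchmarks matter because the same d = 0.33 can be a large effect for one outcome domain and a modest one for another.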
E. C. Hedberg – Grantee Submission, 2016
Background: There is an increased focus on randomized trials for proximal behavioral outcomes in early childhood research. However, planning sample sizes for such designs requires extant information on the size of effect, variance decomposition, and effectiveness of covariates. Objectives: The purpose of this article is to employ a recent large…
Descriptors: Randomized Controlled Trials, Kindergarten, Children, Longitudinal Studies
Rhoads, Christopher – Journal of Research on Educational Effectiveness, 2016
Experimental evaluations that involve the educational system usually involve a hierarchical structure (students are nested within classrooms that are nested within schools, etc.). Concerns about contamination, where research subjects receive certain features of an intervention intended for subjects in a different experimental group, have often led…
Descriptors: Educational Experiments, Error of Measurement, Research Design, Statistical Analysis
Cheung, Alan C. K.; Slavin, Robert E. – Educational Researcher, 2016
As evidence becomes increasingly important in educational policy, it is essential to understand how research design might contribute to reported effect sizes in experiments evaluating educational programs. A total of 645 studies from 12 recent reviews of evaluations of preschool, reading, mathematics, and science programs were studied. Effect…
Descriptors: Effect Size, Research Methodology, Research Design, Preschool Evaluation
Robertson, Clare; Ramsay, Craig; Gurung, Tara; Mowatt, Graham; Pickard, Robert; Sharma, Pawana – Research Synthesis Methods, 2014
We describe our experience of using a modified version of the Cochrane risk of bias (RoB) tool for randomised and non-randomised comparative studies. Objectives: (1) To assess time to complete RoB assessment; (2) To assess inter-rater agreement; and (3) To explore the association between RoB and treatment effect size. Methods: Cochrane risk of…
Descriptors: Risk, Randomized Controlled Trials, Research Design, Comparative Analysis
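One of the stated objectives above is assessing inter-rater agreement on risk-of-bias judgments. A common statistic for that, Cohen's kappa for two raters, can be sketched as follows (the ratings shown are illustrative, not the study's data):

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters' categorical judgments
    (e.g., low / unclear / high risk-of-bias ratings): observed
    agreement corrected for chance agreement."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    p_obs = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    p_exp = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n)
                for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)

a = ["low", "low", "high", "high"]
b = ["low", "high", "high", "high"]
print(cohens_kappa(a, b))  # 0.5
```

Kappa of 0 means agreement no better than chance; 1 means perfect agreement.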