Publication Date
| Date range | Records |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 6 |
| Since 2017 (last 10 years) | 11 |
Author
| Author | Records |
| Bradshaw, Catherine P. | 2 |
| James S. Kim | 2 |
| Joshua B. Gilbert | 2 |
| Konold, Timothy R. | 2 |
| Kush, Joseph M. | 2 |
| Luke W. Miratrix | 2 |
| Ben Kelcey | 1 |
| Benjamin Kelcey | 1 |
| Cox, Kyle | 1 |
| Deke, John | 1 |
| Dong, Nianbo | 1 |
Publication Type
| Publication Type | Records |
| Reports - Research | 9 |
| Journal Articles | 8 |
| Information Analyses | 1 |
| Numerical/Quantitative Data | 1 |
| Reports - Descriptive | 1 |
Education Level
| Education Level | Records |
| Early Childhood Education | 2 |
| Elementary Education | 2 |
| Grade 1 | 2 |
| Grade 2 | 2 |
| Grade 3 | 2 |
| Primary Education | 2 |
Nianbo Dong; Benjamin Kelcey; Jessaca Spybrook – Journal of Experimental Education, 2024
Multisite cluster randomized trials (MCRTs), in which the intermediate-level clusters (e.g., classrooms) are randomly assigned to the treatment or control condition within each site (e.g., school), are among the most commonly used experimental designs across a broad range of disciplines. MCRTs often align with the theory that programs are…
Descriptors: Research Design, Randomized Controlled Trials, Statistical Analysis, Sample Size
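As a hedged illustration of the design named in this abstract (not the authors' code), the following Python sketch randomly assigns hypothetical classrooms to treatment or control separately within each school, which is the defining feature of a multisite cluster randomized trial. All school and classroom names are made up.

```python
import random

# Hypothetical data: each school (site) contains several classrooms (clusters).
schools = {
    "School A": ["A1", "A2", "A3", "A4"],
    "School B": ["B1", "B2", "B3", "B4"],
    "School C": ["C1", "C2", "C3", "C4"],
}

def assign_mcrt(schools, seed=1):
    """Randomly assign half of the classrooms in each school to treatment.

    Randomization is carried out separately within every site, so each
    school contributes both treatment and control classrooms.
    """
    rng = random.Random(seed)
    assignment = {}
    for school, classrooms in schools.items():
        shuffled = classrooms[:]
        rng.shuffle(shuffled)
        half = len(shuffled) // 2
        for c in shuffled[:half]:
            assignment[c] = "treatment"
        for c in shuffled[half:]:
            assignment[c] = "control"
    return assignment

print(assign_mcrt(schools))
```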
Kush, Joseph M.; Konold, Timothy R.; Bradshaw, Catherine P. – Journal of Experimental Education, 2022
In two-level designs, the total sample is a function of both the number of Level 2 clusters and the average number of Level 1 units per cluster. Traditional multilevel power calculations rely on either the arithmetic average or the harmonic mean when estimating the average number of Level 1 units across clusters of unbalanced size. The current…
Descriptors: Multivariate Analysis, Randomized Controlled Trials, Monte Carlo Methods, Sample Size
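The distinction between the arithmetic average and the harmonic mean of unbalanced cluster sizes, mentioned in this abstract, can be seen in a minimal Python sketch (the cluster sizes below are hypothetical):

```python
from statistics import mean, harmonic_mean

# Hypothetical Level-1 unit counts for clusters of unbalanced size.
cluster_sizes = [5, 8, 12, 20, 45]

arithmetic = mean(cluster_sizes)          # simple average: 18.0
harmonic = harmonic_mean(cluster_sizes)   # down-weighted by small clusters: ~10.4

print(f"arithmetic mean = {arithmetic:.1f}")
print(f"harmonic mean   = {harmonic:.1f}")
```

The harmonic mean is pulled toward the smaller clusters, so the two conventions can imply noticeably different power for the same design.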
Kyle Cox; Ben Kelcey; Hannah Luce – Journal of Experimental Education, 2024
Comprehensive evaluation of treatment effects is aided by consideration of moderated effects. In educational research, the combination of natural hierarchical structures and the prevalence of group-administered or shared-facilitator treatments often produces three-level partially nested data structures. The literature details planning strategies for a…
Descriptors: Randomized Controlled Trials, Monte Carlo Methods, Hierarchical Linear Modeling, Educational Research
Qi, Hongchao; Rizopoulos, Dimitris; Rosmalen, Joost – Research Synthesis Methods, 2023
The meta-analytic-predictive (MAP) approach is a Bayesian method for incorporating historical controls into new trials, with the aim of increasing statistical power and reducing the required sample size. Here we investigate how to calculate the sample size of the new trial when historical data are available and the MAP approach is used in the analysis. In…
Descriptors: Sample Size, Computation, Meta Analysis, Bayesian Statistics
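A deliberately simplified sketch of the borrowing idea behind the MAP approach, assuming normal likelihoods, a flat prior on the population control mean, and a known between-trial standard deviation (the full MAP approach instead models heterogeneity with MCMC; all numbers below are hypothetical):

```python
import math

# Hypothetical historical control arms: (mean outcome, standard error).
historical = [(0.52, 0.05), (0.48, 0.06), (0.50, 0.04)]

# Assumed known quantities for this simplified sketch.
tau = 0.03      # between-trial standard deviation (heterogeneity)
sigma = 0.40    # within-trial outcome standard deviation

# Precision-weighted estimate of the population control mean.
weights = [1.0 / (se**2 + tau**2) for _, se in historical]
mu_hat = sum(w * y for (y, _), w in zip(historical, weights)) / sum(weights)
var_mu = 1.0 / sum(weights)

# Predictive (MAP-style) prior for the control mean in a NEW trial:
# uncertainty about the population mean plus between-trial variation.
prior_mean = mu_hat
prior_var = var_mu + tau**2

# Approximate prior effective sample size: roughly how many new-trial
# control participants the historical information is "worth".
ess = sigma**2 / prior_var

print(f"MAP-style prior: mean={prior_mean:.3f}, sd={math.sqrt(prior_var):.3f}")
print(f"approximate prior effective sample size: {ess:.0f}")
```

The prior effective sample size is what allows the new trial's control arm, and hence its total sample size, to be reduced.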
Kush, Joseph M.; Konold, Timothy R.; Bradshaw, Catherine P. – Grantee Submission, 2021
Power in multilevel models remains an area of interest to both methodologists and substantive researchers. In two-level designs, the total sample is a function of both the number of level-2 (e.g., schools) clusters and the average number of level-1 (e.g., classrooms) units per cluster. Traditional multilevel power calculations rely on either the…
Descriptors: Multivariate Analysis, Randomized Controlled Trials, Monte Carlo Methods, Sample Size
Joshua B. Gilbert; James S. Kim; Luke W. Miratrix – Annenberg Institute for School Reform at Brown University, 2024
Longitudinal models of individual growth typically emphasize between-person predictors of change but ignore how growth may vary "within" persons, because each person contributes only one data point to the model at each time point. In contrast, modeling growth with multi-item assessments allows evaluation of how relative item performance may shift…
Descriptors: Vocabulary Development, Item Response Theory, Test Items, Student Development
Joshua B. Gilbert; James S. Kim; Luke W. Miratrix – Applied Measurement in Education, 2024
Longitudinal models typically emphasize between-person predictors of change but ignore how growth varies "within" persons because each person contributes only one data point at each time. In contrast, modeling growth with multi-item assessments allows evaluation of how relative item performance may shift over time. While traditionally…
Descriptors: Vocabulary Development, Item Response Theory, Test Items, Student Development
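One illustrative way to formalize the idea in these two related abstracts, assuming a Rasch-type model in which item difficulty is allowed to drift across measurement occasions (an assumption for exposition, not necessarily the authors' exact specification):

```latex
% Illustrative longitudinal item response model: person p answers item i at time t.
\[
\operatorname{logit}\,\Pr(Y_{pit} = 1)
  = \underbrace{\theta_p + \lambda\, t}_{\text{person growth}}
  \;-\; \underbrace{\left(\beta_i + \gamma_i\, t\right)}_{\text{item difficulty with drift}}
\]
% A nonzero \gamma_i means the relative performance of item i shifts over time,
% i.e., measured growth differs across items within the same person.
```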
Huang, Francis L. – Educational and Psychological Measurement, 2018
Cluster randomized trials involving participants nested within intact treatment and control groups are commonly performed in various educational, psychological, and biomedical studies. However, recruiting and retaining intact groups present various practical, financial, and logistical challenges to evaluators, and often cluster randomized trials…
Descriptors: Multivariate Analysis, Sampling, Statistical Inference, Data Analysis
Kelcey, Benjamin; Dong, Nianbo; Spybrook, Jessaca; Cox, Kyle – Journal of Educational and Behavioral Statistics, 2017
Designs that facilitate inferences concerning both the total and indirect effects of a treatment potentially offer a more holistic description of interventions because they can complement "what works" questions with the comprehensive study of the causal connections implied by substantive theories. Mapping the sensitivity of designs to…
Descriptors: Statistical Analysis, Randomized Controlled Trials, Mediation Theory, Models
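For readers unfamiliar with the indirect effects this abstract refers to, a minimal sketch using the standard product-of-coefficients logic for a single-mediator model (hypothetical numbers, not the authors' design-sensitivity framework):

```python
import math

# Hypothetical path coefficients: treatment -> mediator (a) and
# mediator -> outcome (b), with their standard errors.
a, se_a = 0.40, 0.10
b, se_b = 0.30, 0.08

# Product-of-coefficients indirect effect and its Sobel standard error.
indirect = a * b
se_indirect = math.sqrt(a**2 * se_b**2 + b**2 * se_a**2)

print(f"indirect effect = {indirect:.3f}")
print(f"Sobel SE        = {se_indirect:.3f}")
print(f"approx z        = {indirect / se_indirect:.2f}")
```

Designs sensitive to the indirect effect must power this product term, which generally requires different sample allocations than powering the total effect alone.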
Heyvaert, Mieke; Moeyaert, Mariola; Verkempynck, Paul; Van den Noortgate, Wim; Vervloet, Marlies; Ugille, Maaike; Onghena, Patrick – Journal of Experimental Education, 2017
This article reports on a Monte Carlo simulation study evaluating two approaches for testing the intervention effect in replicated randomized AB designs: two-level hierarchical linear modeling (HLM) and using the additive method to combine randomization test "p" values (RTcombiP). Four factors were manipulated: mean intervention effect,…
Descriptors: Monte Carlo Methods, Simulation, Intervention, Replication (Evaluation)
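The "additive method" for combining randomization test p values named in this abstract is commonly attributed to Edgington. A minimal sketch, assuming independent p values that are uniform under the null, computes the combined p value from the Irwin-Hall distribution of their sum (the example p values are hypothetical):

```python
from math import comb, factorial, floor

def edgington_combined_p(p_values):
    """Combine independent p values with the additive (Edgington) method.

    Under the joint null each p value is (at most) uniform on [0, 1], so the
    combined p value is the Irwin-Hall CDF evaluated at their sum.
    """
    k = len(p_values)
    s = sum(p_values)
    cdf = sum((-1) ** j * comb(k, j) * (s - j) ** k for j in range(floor(s) + 1))
    return cdf / factorial(k)

# Hypothetical p values from replicated single-case AB experiments.
print(edgington_combined_p([0.08, 0.12, 0.20, 0.05]))
```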
Deke, John; Wei, Thomas; Kautz, Tim – National Center for Education Evaluation and Regional Assistance, 2017
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen (1988) characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts…
Descriptors: Intervention, Educational Research, Research Problems, Statistical Bias
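To make concrete what detecting impacts much smaller than 0.20 standard deviations demands of a design, a minimal sketch assuming simple individual random assignment, a two-sided 5% test, 80% power, and the conventional multiplier of about 2.8 (hypothetical numbers, not the report's own calculations):

```python
import math

def mdes_individual_rct(n, p_treated=0.5, multiplier=2.8):
    """Approximate minimum detectable effect size (in SD units) for an
    individually randomized trial with n participants, a proportion
    p_treated of whom are assigned to treatment.

    The multiplier ~2.8 corresponds to a two-sided 5% test at 80% power.
    """
    return multiplier * math.sqrt(1.0 / (p_treated * (1.0 - p_treated) * n))

for n in (200, 800, 3200):
    print(f"n = {n:5d}  ->  MDES ~ {mdes_individual_rct(n):.2f} SD")
```

Halving the minimum detectable effect roughly quadruples the required sample size, which is why targeting impacts below 0.20 SD sharply raises the cost and the stakes of design decisions.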

