| Publication Date | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 11 |
| Since 2017 (last 10 years) | 15 |
| Since 2007 (last 20 years) | 19 |
| Descriptor | Records |
| --- | --- |
| Mathematical Formulas | 38 |
| Research Design | 38 |
| Statistical Analysis | 18 |
| Sample Size | 16 |
| Computation | 9 |
| Analysis of Variance | 7 |
| Hypothesis Testing | 7 |
| Monte Carlo Methods | 7 |
| Randomized Controlled Trials | 7 |
| Sampling | 7 |
| Effect Size | 6 |
| Author | Records |
| --- | --- |
| Ben Kelcey | 2 |
| Benjamin Kelcey | 2 |
| Fangxing Bai | 2 |
| Guo, Jiin-Huarng | 2 |
| Kyle Cox | 2 |
| Luh, Wei-Ming | 2 |
| Nianbo Dong | 2 |
| Yanli Xie | 2 |
| Algina, James | 1 |
| Allal, Linda | 1 |
| Allison, David B. | 1 |
| Audience | Records |
| --- | --- |
| Researchers | 5 |
| Administrators | 1 |
| Practitioners | 1 |
| Location | Records |
| --- | --- |
| New Jersey | 1 |
| Assessments and Surveys | Records |
| --- | --- |
| National Assessment of… | 1 |
Eli Ben-Michael; Lindsay Page; Luke Keele – Grantee Submission, 2024
In a clustered observational study, a treatment is assigned to groups and all units within the group are exposed to the treatment. We develop a new method for statistical adjustment in clustered observational studies using approximate balancing weights, a generalization of inverse propensity score weights that solve a convex optimization problem…
Descriptors: Research Design, Statistical Data, Multivariate Analysis, Observation
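The entry above describes statistical adjustment through approximate balancing weights. A minimal sketch of the general balancing-weights idea (not the authors' exact estimator) follows: choose nonnegative weights on comparison units that pull their weighted covariate means toward the treated means, with a penalty toward uniform weights. All data and tuning values here are hypothetical.

```python
# Minimal covariate-balancing-weights sketch (illustrative only, not the
# clustered estimator from the paper): weights on comparison units minimize
# covariate imbalance plus an L2 penalty toward uniform weights.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X_treated = rng.normal(0.3, 1.0, size=(40, 3))   # treated-unit covariates
X_control = rng.normal(0.0, 1.0, size=(120, 3))  # comparison-unit covariates
target = X_treated.mean(axis=0)                  # covariate means to match

n = X_control.shape[0]
lam = 0.1  # strength of the pull toward uniform weights (assumed value)

def objective(w):
    imbalance = X_control.T @ w - target         # weighted-mean gap per covariate
    return imbalance @ imbalance + lam * np.sum((w - 1.0 / n) ** 2)

res = minimize(objective, x0=np.full(n, 1.0 / n),
               bounds=[(0.0, None)] * n,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
               method="SLSQP")
w = res.x
print("weighted control means:", X_control.T @ w)
print("treated means:         ", target)
```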
Nianbo Dong; Benjamin Kelcey; Jessaca Spybrook – Journal of Experimental Education, 2024
Multisite cluster randomized trials (MCRTs), in which the intermediate-level clusters (e.g., classrooms) are randomly assigned to the treatment or control condition within each site (e.g., school), are among the most commonly used experimental designs across a broad range of disciplines. MCRTs often align with the theory that programs are…
Descriptors: Research Design, Randomized Controlled Trials, Statistical Analysis, Sample Size
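For context on the power questions this entry raises, here is a rough two-level cluster-randomized-trial power calculation using a standard textbook formula (a simplification, not the multisite formulas developed in the paper): J clusters split evenly across conditions, n students per cluster, intraclass correlation rho, standardized effect delta, and no covariates.

```python
# Simplified two-level CRT power: variance of the effect estimate is
# 4*(rho + (1 - rho)/n)/J for a standardized outcome with equal allocation.
import numpy as np
from scipy import stats

def crt_power(J, n, rho, delta, alpha=0.05):
    var = 4.0 * (rho + (1.0 - rho) / n) / J   # variance of the effect estimate
    ncp = delta / np.sqrt(var)                # noncentrality parameter
    df = J - 2
    t_crit = stats.t.ppf(1.0 - alpha / 2.0, df)
    # two-sided power from the noncentral t distribution
    return stats.nct.sf(t_crit, df, ncp) + stats.nct.cdf(-t_crit, df, ncp)

print(round(crt_power(J=40, n=25, rho=0.15, delta=0.25), 3))
```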
Cox, Kyle; Kelcey, Benjamin – American Journal of Evaluation, 2023
Analysis of the differential treatment effects across targeted subgroups and contexts is a critical objective in many evaluations because it delineates for whom and under what conditions particular programs, therapies or treatments are effective. Unfortunately, it is unclear how to plan efficient and effective evaluations that include these…
Descriptors: Statistical Analysis, Research Design, Cluster Grouping, Sample Size
Liyang Sun; Jesse M. Shapiro – Grantee Submission, 2022
Linear panel models featuring unit and time fixed effects appear in many areas of empirical economics. An active literature studies the interpretation of the ordinary least squares estimator of the model, commonly called the two-way fixed effects (TWFE) estimator, in the presence of unmodeled coefficient heterogeneity. We illustrate some…
Descriptors: Statistical Analysis, Computation, Research Design, Economics
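A bare-bones illustration of the two-way fixed effects (TWFE) estimator discussed above: regress the outcome on the treatment indicator plus unit and time dummies in a simulated panel. The data-generating numbers are made up purely for illustration.

```python
# Minimal TWFE regression on a toy staggered-adoption panel.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
units, periods = 50, 8
df = pd.DataFrame(
    [(i, t) for i in range(units) for t in range(periods)],
    columns=["unit", "time"],
)
unit_fe = rng.normal(size=units)[df["unit"]]
time_fe = rng.normal(size=periods)[df["time"]]
# staggered adoption: half the units become treated from period 4 onward
df["D"] = ((df["unit"] < units // 2) & (df["time"] >= 4)).astype(int)
df["y"] = 0.5 * df["D"] + unit_fe + time_fe + rng.normal(scale=0.5, size=len(df))

twfe = smf.ols("y ~ D + C(unit) + C(time)", data=df).fit()
print(twfe.params["D"])  # TWFE estimate of the treatment effect
```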
Amota Ataneka; Fangxing Bai; Yanli Xie; Ben Kelcey; Nianbo Dong; Leigh McLean; Geoffrey Phelps – Society for Research on Educational Effectiveness, 2024
Background: The analysis of mediating mechanisms attempts to examine the impact of, for example, teacher training by dismantling and testing the core components that comprise an underlying theory and program. This process is seen as a critical step in evaluating programs and mapping teacher development because it provides empirical tests of each…
Descriptors: Research Design, Mediation Theory, Literacy, English Instruction
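The entry above concerns mediation analysis in multilevel designs; the sketch below only illustrates the basic a-path/b-path logic in a single-level setting with simulated data, not the study's models.

```python
# Single-level mediation sketch: indirect effect as the product of the
# treatment->mediator path (a) and the mediator->outcome path (b).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
N = 500
T = rng.integers(0, 2, N)                       # treatment indicator
M = 0.4 * T + rng.normal(size=N)                # mediator
Y = 0.3 * M + 0.1 * T + rng.normal(size=N)      # outcome
df = pd.DataFrame({"T": T, "M": M, "Y": Y})

a = smf.ols("M ~ T", df).fit().params["T"]       # a-path
b = smf.ols("Y ~ M + T", df).fit().params["M"]   # b-path
print("indirect effect (a*b):", a * b)
```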
Kyle Cox; Ben Kelcey; Hannah Luce – Journal of Experimental Education, 2024
Comprehensive evaluation of treatment effects is aided by considerations for moderated effects. In educational research, the combination of natural hierarchical structures and prevalence of group-administered or shared facilitator treatments often produces three-level partially nested data structures. Literature details planning strategies for a…
Descriptors: Randomized Controlled Trials, Monte Carlo Methods, Hierarchical Linear Modeling, Educational Research
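The record above uses Monte Carlo methods to study power in three-level partially nested designs. A generic, much simpler Monte Carlo power routine is sketched below to show the simulate-fit-count logic; it is not the authors' simulation design.

```python
# Generic Monte Carlo power estimate: simulate data under an assumed effect,
# run the test, and record the rejection rate across replications.
import numpy as np
from scipy import stats

def mc_power(n_per_group=60, delta=0.4, reps=2000, alpha=0.05, seed=3):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(delta, 1.0, n_per_group)
        rejections += stats.ttest_ind(treated, control).pvalue < alpha
    return rejections / reps

print(mc_power())
```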
Clintin P. Davis-Stober; Jason Dana; David Kellen; Sara D. McMullin; Wes Bonifay – Grantee Submission, 2023
Conducting research with human subjects can be difficult because of limited sample sizes and small empirical effects. We demonstrate that this problem can yield patterns of results that are practically indistinguishable from flipping a coin to determine the direction of treatment effects. We use this idea of random conclusions to establish a…
Descriptors: Research Methodology, Sample Size, Effect Size, Hypothesis Testing
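An illustrative simulation of the "random conclusions" idea in the entry above: with a near-zero true effect and a small sample, the observed sign of the treatment effect is only slightly more reliable than a coin flip. The numbers are arbitrary.

```python
# How often does a tiny study get the direction of a tiny effect right?
import numpy as np

rng = np.random.default_rng(4)
reps, n, true_delta = 10_000, 15, 0.05   # small sample, near-zero true effect
correct_direction = 0
for _ in range(reps):
    treated = rng.normal(true_delta, 1.0, n)
    control = rng.normal(0.0, 1.0, n)
    correct_direction += (treated.mean() - control.mean()) > 0
print("P(correct sign):", correct_direction / reps)  # barely above 0.5
```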
Eric C. Hedberg – Grantee Submission, 2023
In cluster randomized evaluations, a treatment or intervention is randomly assigned to a set of clusters each with constituent individual units of observations (e.g., student units that attend schools, which are assigned to treatment). One consideration of these designs is how many units are needed per cluster to achieve adequate statistical…
Descriptors: Statistical Analysis, Multivariate Analysis, Randomized Controlled Trials, Research Design
E. C. Hedberg – American Journal of Evaluation, 2023
In cluster randomized evaluations, a treatment or intervention is randomly assigned to a set of clusters each with constituent individual units of observations (e.g., student units that attend schools, which are assigned to treatment). One consideration of these designs is how many units are needed per cluster to achieve adequate statistical…
Descriptors: Statistical Analysis, Multivariate Analysis, Randomized Controlled Trials, Research Design
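The two records above ask how many units per cluster are needed for adequate power. A quick way to see the diminishing returns from adding within-cluster units is the standard design effect DEFF = 1 + (n - 1) * rho; this is a simplification of the multivariate questions the papers address, and the ICC value below is assumed.

```python
# Effective sample size plateaus as units per cluster grow, for fixed clusters.
rho = 0.15          # assumed intraclass correlation
clusters = 40
for n in (5, 10, 20, 40, 80):
    deff = 1 + (n - 1) * rho
    effective_n = clusters * n / deff   # effective number of independent units
    print(f"n per cluster = {n:3d}  effective sample size = {effective_n:6.1f}")
```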
Fangxing Bai; Benjamin Kelcey; Yanli Xie; Kyle Cox – Society for Research on Educational Effectiveness, 2022
Background: Regression Discontinuity Design (RDD) is widely used in educational studies. Through RDD, researchers can obtain unbiased results when Randomized Experimental Design (RED) is inaccessible. Compared to RED, the RDD only requires a cut score variable (continuous) and a cutoff value to assign students to the treatment or control groups.…
Descriptors: Research Design, Regression (Statistics), Hierarchical Linear Modeling, Mediation Theory
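A minimal sharp-RDD sketch tied to the entry above: within a bandwidth around the cutoff, fit a line on each side of the cut score and take the jump at the cutoff as the effect. This is single-level and simulated, unlike the multilevel mediation setting the abstract describes; bandwidth and effect sizes are assumptions.

```python
# Sharp regression discontinuity via separate local linear fits at the cutoff.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
N, cutoff, bandwidth = 2000, 0.0, 0.5
running = rng.uniform(-1, 1, N)                  # cut score variable
treated = (running >= cutoff).astype(float)
y = 1.0 * running + 0.4 * treated + rng.normal(scale=0.3, size=N)

def side_fit(mask):
    X = sm.add_constant(running[mask] - cutoff)
    return sm.OLS(y[mask], X).fit()

left = side_fit((running < cutoff) & (running > cutoff - bandwidth))
right = side_fit((running >= cutoff) & (running < cutoff + bandwidth))
print("RDD effect estimate:", right.params[0] - left.params[0])  # jump at cutoff
```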
Li, Wei; Dong, Nianbo; Maynard, Rebecca; Spybrook, Jessaca; Kelcey, Ben – Journal of Research on Educational Effectiveness, 2023
Cluster randomized trials (CRTs) are commonly used to evaluate educational interventions, particularly their effectiveness. Recently there has been greater emphasis on using these trials to explore cost-effectiveness. However, methods for establishing the power of cluster randomized cost-effectiveness trials (CRCETs) are limited. This study…
Descriptors: Research Design, Statistical Analysis, Randomized Controlled Trials, Cost Effectiveness
Xinran Li; Peng Ding; Donald B. Rubin – Grantee Submission, 2020
With many pretreatment covariates and treatment factors, the classical factorial experiment often fails to balance covariates across multiple factorial effects simultaneously. Therefore, it is intuitive to restrict the randomization of the treatment factors to satisfy certain covariate balance criteria, possibly conforming to the tiers of…
Descriptors: Experiments, Research Design, Randomized Controlled Trials, Sampling
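In the spirit of the entry above on restricting randomization to satisfy covariate balance criteria, the sketch below redraws the random assignment until a Mahalanobis balance statistic falls below a threshold. The covariates, threshold, and acceptance rule are made up for illustration and are not the paper's tiered procedure.

```python
# Rerandomization sketch: accept an assignment only if covariate balance,
# measured by a Mahalanobis statistic, is below a chosen threshold.
import numpy as np

rng = np.random.default_rng(6)
N, k = 100, 4
X = rng.normal(size=(N, k))                       # pretreatment covariates
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
threshold = 2.0                                   # assumed acceptance threshold

while True:
    assign = rng.permutation(np.repeat([0, 1], N // 2))
    diff = X[assign == 1].mean(axis=0) - X[assign == 0].mean(axis=0)
    M = (N / 4) * diff @ cov_inv @ diff           # Mahalanobis balance criterion
    if M <= threshold:
        break
print("accepted assignment with balance statistic", round(M, 3))
```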
Qian, Jiahe – ETS Research Report Series, 2017
The variance formula derived for a two-stage sampling design without replacement employs the joint inclusion probabilities in the first-stage selection of clusters. One of the difficulties encountered in data analysis is the lack of information about such joint inclusion probabilities. One way to solve this issue is by applying Hájek's…
Descriptors: Mathematical Formulas, Computation, Sampling, Research Design
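The report above invokes Hájek's approximation to replace unknown joint inclusion probabilities with quantities built from the first-order probabilities alone. A small sketch of one common statement of that approximation, pi_ij ≈ pi_i * pi_j * [1 - (1 - pi_i)(1 - pi_j) / d] with d = sum_k pi_k(1 - pi_k), is below; the example probabilities are arbitrary.

```python
# Hájek-style approximation of joint inclusion probabilities from
# first-order inclusion probabilities.
import numpy as np

pi = np.array([0.10, 0.25, 0.40, 0.55, 0.70])   # first-stage inclusion probs
d = np.sum(pi * (1.0 - pi))
pi_joint = np.outer(pi, pi) * (1.0 - np.outer(1.0 - pi, 1.0 - pi) / d)
np.fill_diagonal(pi_joint, pi)                  # pi_ii is just pi_i
print(np.round(pi_joint, 4))
```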
Bulus, Metin – ProQuest LLC, 2017
In education, sample characteristics can be complex due to the nested structure of students, teachers, classrooms, schools, and districts. In the past, little consideration was given to such complex sampling schemes in statistical power analysis. Over the past two decades, however, education scholars have developed tools to conduct…
Descriptors: Educational Research, Regression (Statistics), Research Design, Statistical Analysis
Deke, John; Wei, Thomas; Kautz, Tim – National Center for Education Evaluation and Regional Assistance, 2017
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen (1988) characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts…
Descriptors: Intervention, Educational Research, Research Problems, Statistical Bias

