| Publication Date | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 2 |
| Since 2017 (last 10 years) | 6 |
| Since 2007 (last 20 years) | 35 |
| Descriptor | Results |
| --- | --- |
| Experiments | 40 |
| Sample Size | 40 |
| Statistical Analysis | 16 |
| Research Design | 15 |
| Effect Size | 13 |
| Correlation | 12 |
| Computation | 8 |
| Intervention | 8 |
| Comparative Analysis | 7 |
| Control Groups | 7 |
| Probability | 7 |
| Publication Type | Results |
| --- | --- |
| Journal Articles | 33 |
| Reports - Research | 25 |
| Reports - Descriptive | 7 |
| Reports - Evaluative | 6 |
| Information Analyses | 3 |
| Speeches/Meeting Papers | 2 |
| Guides - Non-Classroom | 1 |
| Education Level | Results |
| --- | --- |
| Higher Education | 6 |
| Elementary Secondary Education | 2 |
| Elementary Education | 1 |
| Postsecondary Education | 1 |
| Primary Education | 1 |
| Audience | Results |
| --- | --- |
| Researchers | 1 |
| Students | 1 |
| Teachers | 1 |
| Laws, Policies, & Programs | Results |
| --- | --- |
| No Child Left Behind Act 2001 | 1 |
| Assessments and Surveys | Results |
| --- | --- |
| Comprehensive Tests of Basic… | 1 |
| Early Childhood Longitudinal… | 1 |
E. C. Hedberg – American Journal of Evaluation, 2023 (also available as a Grantee Submission)
In cluster randomized evaluations, a treatment or intervention is randomly assigned to a set of clusters, each with constituent individual units of observation (e.g., students who attend schools, where the schools are assigned to treatment). One consideration in these designs is how many units are needed per cluster to achieve adequate statistical…
Descriptors: Statistical Analysis, Multivariate Analysis, Randomized Controlled Trials, Research Design
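The arithmetic behind the units-per-cluster question can be sketched with the standard design effect for cluster sampling, DEFF = 1 + (n − 1)ρ, where ρ is the intraclass correlation. This is a minimal illustration of that textbook formula, not the paper's own analysis:

```python
# Illustrative sketch (standard design-effect formula, not the paper's method).

def design_effect(n_per_cluster: int, rho: float) -> float:
    """Variance inflation factor for clusters of size n with ICC rho."""
    return 1 + (n_per_cluster - 1) * rho

def effective_sample_size(clusters: int, n_per_cluster: int, rho: float) -> float:
    """Total N deflated by the design effect."""
    total = clusters * n_per_cluster
    return total / design_effect(n_per_cluster, rho)

# Adding units within clusters yields diminishing returns when rho > 0:
print(effective_sample_size(20, 10, 0.10))  # ≈ 105.3
print(effective_sample_size(20, 40, 0.10))  # ≈ 163.3 (4x the units, far from 4x the information)
```

The second call quadruples the per-cluster sample yet raises the effective sample size by only about half, which is exactly why "how many units per cluster" is a design question in its own right.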
Hong, Sanghyun; Reed, W. Robert – Research Synthesis Methods, 2021
The purpose of this study is to show how Monte Carlo analysis of meta-analytic estimators can be used to select estimators for specific research situations. Our analysis conducts 1620 individual experiments, where each experiment is defined by a unique combination of sample size, effect size, effect size heterogeneity, publication selection…
Descriptors: Monte Carlo Methods, Meta Analysis, Research Methodology, Experiments
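To give a flavor of this kind of Monte Carlo comparison of meta-analytic estimators, here is a minimal sketch with illustrative settings of my own (not the study's 1620 experimental conditions), contrasting an unweighted mean of effect estimates with the inverse-variance-weighted mean:

```python
# Hypothetical Monte Carlo sketch: which estimator of a common true effect
# is more precise across simulated meta-analyses? Settings are illustrative.
import random
import statistics

random.seed(1)
TRUE_EFFECT = 0.3
REPS = 2000

def one_meta(k=10):
    """Simulate k studies with unequal sample sizes; return both estimates."""
    ests, weights = [], []
    for _ in range(k):
        n = random.choice([20, 50, 200])
        se = (2 / n) ** 0.5          # rough SE of a standardized mean difference
        d = random.gauss(TRUE_EFFECT, se)
        ests.append(d)
        weights.append(1 / se**2)    # inverse-variance weight
    unweighted = statistics.fmean(ests)
    weighted = sum(w * d for w, d in zip(weights, ests)) / sum(weights)
    return unweighted, weighted

results = [one_meta() for _ in range(REPS)]
for name, col in zip(("unweighted", "inverse-variance"), zip(*results)):
    vals = list(col)
    print(f"{name}: mean={statistics.fmean(vals):.3f}, sd={statistics.stdev(vals):.3f}")
```

Both estimators are unbiased here, but the inverse-variance mean has the smaller spread; the study's point is that such rankings can flip once heterogeneity and publication selection enter the simulation design.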
Barnow, Burt S.; Greenberg, David H. – American Journal of Evaluation, 2020
This paper reviews the use of multiple trials, defined as multiple sites or multiple arms in a single evaluation and replications, in evaluating social programs. After defining key terms, the paper discusses the rationales for conducting multiple trials, which include increasing sample size to increase statistical power; identifying the most…
Descriptors: Evaluation, Randomized Controlled Trials, Experiments, Replication (Evaluation)
Trafimow, David – Educational and Psychological Measurement, 2018
Because error variance alternatively can be considered to be the sum of systematic variance associated with unknown variables and randomness, a tripartite assumption is proposed that total variance in the dependent variable can be partitioned into three variance components. These are variance in the dependent variable that is explained by the…
Descriptors: Statistical Analysis, Correlation, Experiments, Effect Size
Gagnon-Bartsch, J. A.; Sales, A. C.; Wu, E.; Botelho, A. F.; Erickson, J. A.; Miratrix, L. W.; Heffernan, N. T. – Grantee Submission, 2019
Randomized controlled trials (RCTs) admit unconfounded design-based inference--randomization largely justifies the assumptions underlying statistical effect estimates--but often have limited sample sizes. However, researchers may have access to big observational data on covariates and outcomes from RCT non-participants. For example, data from A/B…
Descriptors: Randomized Controlled Trials, Educational Research, Prediction, Algorithms
Hedges, Larry V.; Borenstein, Michael – Journal of Educational and Behavioral Statistics, 2014
The precision of estimates of treatment effects in multilevel experiments depends on the sample sizes chosen at each level. It is often desirable to choose sample sizes at each level to obtain the smallest variance for a fixed total cost, that is, to obtain optimal sample allocation. This article extends previous results on optimal allocation to…
Descriptors: Experiments, Research Design, Sample Size, Correlation
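For context, a classic baseline result in this optimal-allocation literature (the paper extends results of this kind; this is the textbook two-level formula, not its new derivations) says that with cost c1 per cluster and c2 per unit, the variance-minimizing cluster size is n* = sqrt((c1/c2)·(1 − ρ)/ρ):

```python
# Textbook two-level optimal allocation under a cost constraint (illustrative
# baseline; not the article's extended results).
import math

def optimal_units_per_cluster(cost_cluster: float, cost_unit: float, rho: float) -> float:
    """Units per cluster minimizing variance for a fixed total cost."""
    return math.sqrt((cost_cluster / cost_unit) * (1 - rho) / rho)

# Expensive clusters and a low ICC favor larger clusters:
print(optimal_units_per_cluster(1000, 10, 0.05))  # ≈ 43.6
print(optimal_units_per_cluster(1000, 10, 0.20))  # ≈ 20.0
```

Note that the optimum depends on the cost ratio and ρ only, not on the total budget: the budget then determines how many such clusters you can afford.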
Csibra, Gergely; Hernik, Mikolaj; Mascaro, Olivier; Tatone, Denis; Lengyel, Máté – Developmental Psychology, 2016
Looking times (LTs) are frequently measured in empirical research on infant cognition. We analyzed the statistical distribution of LTs across participants to develop recommendations for their treatment in infancy research. Our analyses focused on a common within-subject experimental design, in which longer looking to novel or unexpected stimuli is…
Descriptors: Eye Movements, Time, Statistical Distributions, Infants
Meier, Kimberly M.; Blair, Mark R. – Cognition, 2013
The current study investigates the relative extent to which information utility and planning efficiency guide information-sampling strategies in a classification task. Prior research has pointed to the importance of probability gain, the degree to which sampling a feature reduces the chance of error, in contexts where participants are restricted…
Descriptors: Sampling, Probability, Experiments, Eye Movements
Ryan, Wendy L.; St. Iago-McRae, Ezry – Bioscene: Journal of College Biology Teaching, 2016
Experimentation is the foundation of science and an important process for students to understand and experience. However, it can be difficult to teach some aspects of experimentation within the time and resource constraints of an academic semester. Interactive models can be a useful tool in bridging this gap. This freely accessible simulation…
Descriptors: Research Design, Simulation, Animals, Animal Behavior
Drummond, Gordon B.; Vowler, Sarah L. – Advances in Physiology Education, 2013
This final article in the authors' series draws together some of the ideas they have addressed, and suggests important "ingredients" that make a paper palatable to the reviewer and the reader. These ingredients include: (1) Describe the methods; (2) Plan the analysis; (3) Design the study; (4) Use the correct experimental unit; and (5)…
Descriptors: Experiments, Physiology, Science Education, Science Instruction
Walsh, Rosalind L.; Kemp, Coral – Gifted Child Quarterly, 2013
Single-subject experimental designs have long been used in special education to evaluate the effectiveness of interventions for exceptional children. However, the design has not been used widely in gifted education. In this article, an overview of the main features of single-subject design is presented, and its potential for application in gifted…
Descriptors: Gifted, Intervention, Experiments, Special Education
Shadish, William; Hedges, Larry; Pustejovsky, James; Rindskopf, David – Society for Research on Educational Effectiveness, 2012
Over the last 10 years, numerous authors have proposed effect size estimators for single-case designs. None, however, has been shown to be equivalent to the usual between-groups standardized mean difference statistic, sometimes called d. The present paper remedies that omission. Most effect size estimators for single-case designs use the…
Descriptors: Effect Size, Experiments, Sample Size, Comparative Analysis
Murray, Arthur; Hart, Ian – Physics Education, 2012
The "radioactive dice" experiment is a commonly used classroom analogue to model the decay of radioactive nuclei. However, the value of the half-life obtained from this experiment differs significantly from that calculated for real nuclei decaying exponentially with the same decay constant. This article attempts to explain the discrepancy and…
Descriptors: Science Experiments, Intervals, Experiments, Prediction
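The discrepancy is easy to reproduce (my sketch of the analogue, not the authors' analysis): each die "decays" on one face, so the per-roll decay probability is p = 1/6, and survival follows (5/6)^t. The discrete half-life ln 2 / ln(6/5) ≈ 3.80 rolls is noticeably shorter than the ln 2 / λ ≈ 4.16 obtained by naively treating p as a continuous decay constant:

```python
# "Radioactive dice" sketch: compare the simulated half-life with the
# discrete and naive-continuous predictions.
import math
import random

random.seed(0)

def simulate_half_life(n_dice=100_000):
    """Roll all surviving dice each round; count rounds until half remain."""
    alive = n_dice
    rounds = 0
    while alive > n_dice / 2:
        alive = sum(1 for _ in range(alive) if random.random() >= 1 / 6)
        rounds += 1
    return rounds

print("simulated rounds to half-decay:", simulate_half_life())  # whole rounds, typically 4
print("discrete half-life:", math.log(2) / math.log(6 / 5))     # ≈ 3.80
print("naive continuous (lambda = 1/6):", math.log(2) / (1 / 6))  # ≈ 4.16
```

The gap arises because a per-roll probability of 1/6 is not the same thing as a decay constant of 1/6 per unit time; equating them is exactly the approximation the classroom analogue invites.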
Price, Paul C.; Kimura, Nicole M.; Smith, Andrew R.; Marshall, Lindsay D. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2014
Previous research has shown that people exhibit a sample size bias when judging the average of a set of stimuli on a single dimension. The more stimuli there are in the set, the greater people judge the average to be. This effect has been demonstrated reliably for judgments of the average likelihood that groups of people will experience negative,…
Descriptors: Sample Size, Statistical Bias, Visual Perception, Pictorial Stimuli

