Publication Date
| Date range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 1 |
| Since 2017 (last 10 years) | 3 |
| Since 2007 (last 20 years) | 5 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Effect Size | 12 |
| Hypothesis Testing | 12 |
| Sample Size | 8 |
| Correlation | 5 |
| Educational Research | 5 |
| Intervention | 5 |
| Statistical Analysis | 5 |
| Evaluation Methods | 4 |
| Monte Carlo Methods | 4 |
| Power (Statistics) | 4 |
| Research Methodology | 4 |
Source
| Source | Count |
| --- | --- |
| Grantee Submission | 1 |
| Journal of Research on… | 1 |
| Journal of Special Education | 1 |
| MDRC | 1 |
| National Center for Education… | 1 |
| Review of Educational Research | 1 |
Publication Type
| Publication Type | Count |
| --- | --- |
| Reports - Research | 6 |
| Speeches/Meeting Papers | 6 |
| Guides - Non-Classroom | 4 |
| Journal Articles | 3 |
| Information Analyses | 2 |
| Opinion Papers | 2 |
| Reports - Descriptive | 1 |
Education Level
| Education Level | Count |
| --- | --- |
| Elementary Secondary Education | 1 |
| Higher Education | 1 |
Audience
| Audience | Count |
| --- | --- |
| Researchers | 12 |
| Practitioners | 1 |
Assessments and Surveys
| Assessment/Survey | Count |
| --- | --- |
| Stanford Diagnostic Reading… | 1 |
Deke, John; Finucane, Mariel; Thal, Daniel – National Center for Education Evaluation and Regional Assistance, 2022
BASIE is a framework for interpreting impact estimates from evaluations. It is an alternative to null hypothesis significance testing. This guide walks researchers through the key steps of applying BASIE, including selecting prior evidence, reporting impact estimates, interpreting impact estimates, and conducting sensitivity analyses. The guide…
Descriptors: Bayesian Statistics, Educational Research, Data Interpretation, Hypothesis Testing
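The guide itself is the authoritative reference for BASIE; as a rough illustration of the Bayesian updating idea it builds on, the sketch below (not taken from the guide) combines a hypothetical prior about an intervention's impact with a hypothetical study estimate and reports the probability of a positive effect. The `posterior_impact` helper and all numeric values are assumptions for illustration only.

```python
# A minimal sketch of normal-normal Bayesian updating of an impact estimate:
# combine prior evidence with a new study's estimate, then report Pr(effect > 0).
# All numbers below are illustrative placeholders, not values from the guide.
from scipy.stats import norm

def posterior_impact(prior_mean, prior_se, est, est_se):
    """Conjugate normal-normal update of an impact estimate."""
    prior_var, est_var = prior_se**2, est_se**2
    post_var = 1.0 / (1.0 / prior_var + 1.0 / est_var)
    post_mean = post_var * (prior_mean / prior_var + est / est_var)
    return post_mean, post_var**0.5

# Illustrative prior (e.g., from earlier evaluations of similar interventions)
# and an illustrative study estimate of 0.12 SD with standard error 0.08.
mean, se = posterior_impact(prior_mean=0.05, prior_se=0.10, est=0.12, est_se=0.08)
print(f"posterior mean = {mean:.3f}, posterior SE = {se:.3f}")
print(f"Pr(effect > 0) = {1 - norm.cdf(0, loc=mean, scale=se):.3f}")
```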
Porter, Kristin E. – Journal of Research on Educational Effectiveness, 2018
Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
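As a quick illustration of what an MTP does, the sketch below adjusts a set of made-up p-values with three common procedures available in `statsmodels`. It is a generic example of multiplicity adjustment, not the specific procedures or power analyses developed in the article.

```python
# A minimal sketch of applying multiple testing procedures (MTPs) to p-values
# from tests across several outcomes or subgroups. The p-values are illustrative.
from statsmodels.stats.multitest import multipletests

pvals = [0.003, 0.021, 0.045, 0.19, 0.62]  # hypothetical per-outcome p-values

for method in ("bonferroni", "holm", "fdr_bh"):
    reject, adjusted, _, _ = multipletests(pvals, alpha=0.05, method=method)
    print(method, [f"{p:.3f}" for p in adjusted], reject.tolist())
```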
Porter, Kristin E. – Grantee Submission, 2017
Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
Porter, Kristin E. – MDRC, 2016
In education research and in many other fields, researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
Roscoe, Rod D.; Chi, Michelene T. H. – Review of Educational Research, 2007
Prior research has established that peer tutors can benefit academically from their tutoring experiences. However, although tutor learning has been observed across diverse settings, the magnitude of these gains is often underwhelming. In this review, the authors consider how analyses of tutors' actual behaviors may help to account for variation in…
Descriptors: Prior Learning, Tutors, Methods, Training Methods
Becker, Betsy Jane – 1984
Power is an indicator of the ability of a statistical analysis to detect a phenomenon that does in fact exist. The issue of power is crucial for social science research because sample size, effects, and relationships studied tend to be small and the power of a study relates directly to the size of the effect of interest and the sample size.…
Descriptors: Effect Size, Hypothesis Testing, Meta Analysis, Power (Statistics)
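To make the power/effect-size/sample-size relationship concrete, the sketch below computes power for a two-sample t-test at a few standardized effect sizes and then solves for the sample size needed to detect a small effect. The design, effect sizes, and sample sizes are illustrative choices, not examples from the paper.

```python
# A minimal sketch of how power depends on effect size and sample size,
# using a two-sample t-test as the example design. Numbers are illustrative.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.2, 0.5, 0.8):          # small, medium, large standardized effects
    power = analysis.power(effect_size=d, nobs1=50, alpha=0.05)
    print(f"d = {d}: power with n = 50 per group = {power:.2f}")

# Sample size per group needed to detect a small effect with 80% power.
n_needed = analysis.solve_power(effect_size=0.2, power=0.80, alpha=0.05)
print(f"n per group for d = 0.2 at 80% power: {n_needed:.0f}")
```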
Barcikowski, Robert S.; Robey, Randall R. – 1985
This paper provides researchers with a method of determining sample size for a given power level in the preparation of a single group exploratory repeated measure analysis. The rationale for determining sample size which takes into consideration the powers and assumptions of both the adjusted univariate and multivariate repeated measures tests is…
Descriptors: Analysis of Variance, Effect Size, Hypothesis Testing, Multivariate Analysis
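The paper's own method weighs the adjusted univariate and multivariate repeated-measures tests together; as a loose, simplified illustration of power-based sample-size planning for a single-group repeated-measures design, the Monte Carlo sketch below estimates power of the unadjusted univariate F test under an assumed effect pattern and compound-symmetric covariance. Every value in it is an assumption for illustration.

```python
# A minimal Monte Carlo sketch of sample-size planning for a single-group
# repeated-measures design: simulate data under an assumed effect pattern and
# covariance, then estimate power of the (unadjusted) univariate F test.
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(0)
means = np.array([0.0, 0.25, 0.5])      # hypothetical means at 3 occasions (SD units)
rho, k, alpha, reps = 0.5, 3, 0.05, 2000
cov = np.full((k, k), rho) + np.eye(k) * (1 - rho)   # compound symmetry

def power(n):
    hits = 0
    for _ in range(reps):
        y = rng.multivariate_normal(means, cov, size=n)
        grand = y.mean()
        ss_occ = n * ((y.mean(axis=0) - grand) ** 2).sum()
        ss_err = ((y - y.mean(axis=0) - y.mean(axis=1, keepdims=True) + grand) ** 2).sum()
        F = (ss_occ / (k - 1)) / (ss_err / ((n - 1) * (k - 1)))
        hits += F > f_dist.ppf(1 - alpha, k - 1, (n - 1) * (k - 1))
    return hits / reps

for n in (10, 20, 30):
    print(f"n = {n}: estimated power = {power(n):.2f}")
```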
Brewer, James K.; Sindelar, Paul T. – Journal of Special Education, 1988 (peer reviewed)
From a priori and post hoc data collection perspectives, this paper describes the interrelations among (1) power, alpha, effect size, and sample size for hypothesis testing; and (2) precision, confidence, and sample size for interval estimation. Implications for special education researchers working with convenient samples of fixed size are…
Descriptors: Data Collection, Disabilities, Educational Research, Effect Size
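A brief sketch of the two perspectives the article relates, under a fixed "convenient" sample size: the minimum detectable standardized effect at a given power and alpha, and the precision (confidence-interval half-width) of an interval estimate. The design and numbers are illustrative, not drawn from the article.

```python
# A minimal sketch: with a fixed sample size, (a) what standardized effect is
# detectable at 80% power, and (b) how precise an interval estimate can be.
from scipy.stats import t
from statsmodels.stats.power import TTestIndPower

n_per_group, alpha = 25, 0.05                     # hypothetical fixed sample

# (a) Minimum detectable effect size for a two-sample t-test at 80% power.
mde = TTestIndPower().solve_power(nobs1=n_per_group, alpha=alpha, power=0.80)
print(f"minimum detectable effect (d) at 80% power: {mde:.2f}")

# (b) Precision: half-width of a 95% CI for the mean difference, in SD units.
df = 2 * n_per_group - 2
half_width = t.ppf(1 - alpha / 2, df) * (2 / n_per_group) ** 0.5
print(f"95% CI half-width (SD units): ±{half_width:.2f}")
```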
Becker, Betsy Jane – 1986
This paper discusses distribution theory and power computations for four common "tests of combined significance." These tests are calculated using one-sided sample probabilities or p values from independent studies (or hypothesis tests), and provide an overall significance level for the series of results. Noncentral asymptotic sampling…
Descriptors: Achievement Tests, Correlation, Effect Size, Hypothesis Testing
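As a minimal illustration of a test of combined significance (not the noncentral distribution theory or power computations the paper develops), the sketch below combines made-up one-sided p-values from independent studies using Fisher's and Stouffer's methods.

```python
# A minimal sketch of a "test of combined significance": combining one-sided
# p-values from independent studies into an overall significance level.
from scipy.stats import combine_pvalues

pvals = [0.08, 0.12, 0.03, 0.20]   # hypothetical one-sided p-values

for method in ("fisher", "stouffer"):
    stat, p_combined = combine_pvalues(pvals, method=method)
    print(f"{method}: statistic = {stat:.2f}, combined p = {p_combined:.4f}")
```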
Thompson, Bruce – 1987
This paper evaluates the logic underlying various criticisms of statistical significance testing and makes specific recommendations for scientific and editorial practice that might better increase the knowledge base. Reliance on the traditional hypothesis testing model has led to a major bias against nonsignificant results and to misinterpretation…
Descriptors: Analysis of Variance, Data Interpretation, Editors, Effect Size
Chang, Lin; Becker, Betsy Jane – 1987
Data drawn from 30 journal articles and ERIC documents reporting on gender differences in natural science achievement were re-examined. Three meta-analysis methods were used: (1) vote counts and vote-counting estimation procedures; (2) tests of combined significance; and (3) analyses of effect sizes. The three methods produced seemingly…
Descriptors: Academic Achievement, Comparative Analysis, Effect Size, High Schools
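To show how two of the three meta-analytic approaches can point in different directions, the sketch below applies a simple vote count and a fixed-effect weighted mean of effect sizes to made-up study results. It is illustrative only and unrelated to the gender-differences data the paper re-examines.

```python
# A minimal sketch contrasting a vote count of significant results with a
# fixed-effect (inverse-variance weighted) mean of effect sizes.
import numpy as np

d  = np.array([0.30, 0.10, -0.05, 0.25, 0.15])   # hypothetical effect sizes (d)
se = np.array([0.15, 0.20, 0.18, 0.12, 0.25])    # their standard errors
sig = np.abs(d / se) > 1.96                       # two-sided z test per study

print(f"vote count: {sig.sum()} of {len(d)} studies significant")

w = 1 / se**2                                     # inverse-variance weights
d_bar = np.sum(w * d) / np.sum(w)
se_bar = np.sqrt(1 / np.sum(w))
print(f"fixed-effect mean d = {d_bar:.2f} (SE = {se_bar:.2f})")
```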
Hummel, Thomas J.; Johnston, Charles B. – 1986
This study investigated seven methods for analyzing multivariate group differences. Bonferroni t statistics, multivariate analysis of variance (MANOVA) followed by analysis of variance (ANOVA), and five other methods were studied using Monte Carlo methods. Methods were compared with respect to (1) experimentwise error rate; (2) power; (3) number…
Descriptors: Analysis of Variance, Comparative Analysis, Correlation, Differences
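In the spirit of the study's Monte Carlo comparisons (though far simpler), the sketch below estimates the experimentwise Type I error rate of Bonferroni-adjusted t tests on correlated outcomes when no true group differences exist. The number of outcomes, correlation, and sample size are arbitrary illustrative choices, not the study's conditions.

```python
# A minimal Monte Carlo sketch: experimentwise Type I error of Bonferroni-adjusted
# two-group t tests on several correlated outcomes under a true null.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n, p, rho, alpha, reps = 30, 4, 0.4, 0.05, 2000
cov = np.full((p, p), rho) + np.eye(p) * (1 - rho)

false_alarms = 0
for _ in range(reps):
    g1 = rng.multivariate_normal(np.zeros(p), cov, size=n)
    g2 = rng.multivariate_normal(np.zeros(p), cov, size=n)   # same population
    pvals = np.array([ttest_ind(g1[:, j], g2[:, j]).pvalue for j in range(p)])
    false_alarms += np.any(pvals < alpha / p)                 # Bonferroni threshold
print(f"estimated experimentwise error rate: {false_alarms / reps:.3f}")
```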
