Publication Date
In 2025: 0
Since 2024: 2
Since 2021 (last 5 years): 3
Since 2016 (last 10 years): 14
Since 2006 (last 20 years): 35
Descriptor
Computation: 37
Correlation: 37
Effect Size: 37
Statistical Analysis: 19
Sample Size: 12
Meta Analysis: 11
Research Design: 8
Educational Research: 7
Intervention: 7
Research Methodology: 7
Hypothesis Testing: 6
Author
Porter, Kristin E.: 3
Boorse, Jaclin: 2
Hedges, Larry V.: 2
Klingbeil, David A.: 2
Van Norman, Ethan R.: 2
Acar, Selcuk: 1
Aloe, Ariel M.: 1
Banjanovic, Erin S.: 1
Barnes, Marcia: 1
Becker, Betsy Jane: 1
Black, Sandra E.: 1
Publication Type
Journal Articles: 30
Reports - Research: 21
Reports - Evaluative: 8
Information Analyses: 5
Reports - Descriptive: 4
Guides - Non-Classroom: 3
Dissertations/Theses -…: 1
Speeches/Meeting Papers: 1
Education Level
Elementary Education: 7
Higher Education: 4
Junior High Schools: 3
Middle Schools: 3
Postsecondary Education: 3
Secondary Education: 3
Elementary Secondary Education: 2
Grade 4: 2
Grade 5: 2
Grade 8: 2
High Schools: 2
Audience
Researchers: 3
Assessments and Surveys
Early Childhood Longitudinal…: 1
National Assessment of…: 1
National Survey of Student…: 1
Program for International…: 1
Self Description Questionnaire: 1
Larry V. Hedges; William R. Shadish; Prathiba Natesan Batley – Grantee Submission, 2022
Currently, the design standards for single-case experimental designs (SCEDs) are based on validity considerations as prescribed by the What Works Clearinghouse. However, there is also a need for design considerations, such as statistical power, grounded in statistical analysis. We derive and compute power for (AB)^k designs with multiple…
Descriptors: Statistical Analysis, Research Design, Computation, Case Studies
Ethan R. Van Norman; Jaclin Boorse; David A. Klingbeil – Grantee Submission, 2024
Despite the increased number of quantitative effect sizes developed for single-case experimental designs (SCEDs), visual analysis remains the gold standard for evaluating methodological rigor of SCEDs and determining whether a functional relation between the treatment and the outcome exists. The physical length and range of values plotted on x and…
Descriptors: Visual Aids, Outcomes of Education, Oral Reading, Reading Comprehension
Ethan R. Van Norman; Jaclin Boorse; David A. Klingbeil – Journal of Behavioral Education, 2024
Despite the increased number of quantitative effect sizes developed for single-case experimental designs (SCEDs), visual analysis remains the gold standard for evaluating methodological rigor of SCEDs and determining whether a functional relation between the treatment and the outcome exists. The physical length and range of values plotted on x and…
Descriptors: Visual Aids, Outcomes of Education, Oral Reading, Reading Comprehension
Moeyaert, Mariola – Behavioral Disorders, 2019
Multilevel meta-analysis is an innovative synthesis technique used for the quantitative integration of effect size estimates across participants and across studies. The quantitative summary allows for objective, evidence-based, and informed decisions in research, practice, and policy. Based on previous methodological work, the technique results in…
Descriptors: Meta Analysis, Evidence, Correlation, Predictor Variables
Trafimow, David – Educational and Psychological Measurement, 2018
Because error variance alternatively can be considered to be the sum of systematic variance associated with unknown variables and randomness, a tripartite assumption is proposed that total variance in the dependent variable can be partitioned into three variance components. These are variance in the dependent variable that is explained by the…
Descriptors: Statistical Analysis, Correlation, Experiments, Effect Size
Banjanovic, Erin S.; Osborne, Jason W. – Practical Assessment, Research & Evaluation, 2016
Confidence intervals for effect sizes (CIES) provide readers with an estimate of the strength of a reported statistic as well as the relative precision of the point estimate. These statistics offer more information and context than null hypothesis statistic testing. Although confidence intervals have been recommended by scholars for many years,…
Descriptors: Computation, Statistical Analysis, Effect Size, Sampling
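The confidence intervals for effect sizes that Banjanovic and Osborne discuss can be illustrated with a common large-sample approximation for Cohen's d. This is a minimal sketch, not the authors' procedure; the normal-theory standard error used here is an assumption for illustration.

```python
import math
from statistics import NormalDist

def cohens_d_ci(d, n1, n2, confidence=0.95):
    """Approximate normal-theory confidence interval for Cohen's d.

    Uses the large-sample standard error of d; illustrative only.
    """
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return d - z * se, d + z * se

# A medium effect estimated from two groups of 40:
lo, hi = cohens_d_ci(0.5, n1=40, n2=40)
```

Reporting the interval (here roughly 0.05 to 0.95) conveys both the strength of the effect and the precision of the estimate, which a bare p-value does not.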
Porter, Kristin E. – Journal of Research on Educational Effectiveness, 2018
Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
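One widely used multiple testing procedure of the kind Porter examines is Holm's step-down correction, which controls the familywise error rate without assuming independence among tests. A minimal sketch (not tied to any specific MTP in the article):

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Holm's step-down procedure: returns a reject decision per test.

    Tests are compared against alpha / (m - rank) in ascending p-value
    order; once one comparison fails, all remaining tests fail too.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # larger p-values cannot pass a stricter threshold
    return reject

decisions = holm_bonferroni([0.001, 0.04, 0.03, 0.20])
# Only the smallest p-value survives the correction here.
```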
Gorard, Stephen – International Journal of Research & Method in Education, 2015
This paper revisits the use of effect sizes in the analysis of experimental and similar results, and reminds readers of the relative advantages of the mean absolute deviation as a measure of variation, as opposed to the more complex standard deviation. The mean absolute deviation is easier to use and understand, and more tolerant of extreme…
Descriptors: Effect Size, Computation, Comparative Analysis, Simulation
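The mean absolute deviation Gorard advocates is straightforward to compute alongside the standard deviation; a short Python sketch for comparison (sample data is illustrative):

```python
import statistics

def mean_absolute_deviation(xs):
    """Mean absolute deviation about the mean."""
    m = statistics.fmean(xs)
    return sum(abs(x - m) for x in xs) / len(xs)

data = [2, 4, 4, 4, 5, 5, 7, 9]
mad = mean_absolute_deviation(data)  # 1.5
sd = statistics.pstdev(data)         # 2.0
```

Because the MAD weights each deviation linearly rather than quadratically, a single extreme score inflates it less than it inflates the standard deviation, which is the tolerance property the abstract refers to.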
Porter, Kristin E. – Grantee Submission, 2017
Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
Porter, Kristin E. – MDRC, 2016
In education research and in many other fields, researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
Weller, Susan C. – Field Methods, 2015
This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…
Descriptors: Sample Size, Statistical Analysis, Computation, Hypothesis Testing
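Quick sample-size estimates of the sort Weller describes often reduce to a normal-approximation formula. This sketch (an assumption for illustration, not the article's method) gives the per-group n for a two-sample comparison of means at a given standardized effect size:

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sample comparison of means.

    Normal approximation: n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2,
    where d is the standardized effect size (Cohen's d).
    """
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return math.ceil(2 * ((z_a + z_b) / d) ** 2)

n = n_per_group(0.5)  # about 63 per group for a medium effect
```

Only three parameters are needed (effect size, alpha, and desired power), which is why such formulas integrate easily across the basic hypothesis tests the abstract mentions.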
E. C. Hedberg – Grantee Submission, 2016
Background: There is an increased focus on randomized trials for proximal behavioral outcomes in early childhood research. However, planning sample sizes for such designs requires extant information on the size of effect, variance decomposition, and effectiveness of covariates. Objectives: The purpose of this article is to employ a recent large…
Descriptors: Randomized Controlled Trials, Kindergarten, Children, Longitudinal Studies
Lai, Mark H. C.; Kwok, Oi-Man – Journal of Educational and Behavioral Statistics, 2014
Multilevel modeling techniques are becoming more popular in handling data with multilevel structure in educational and behavioral research. Recently, researchers have paid more attention to cross-classified data structure that naturally arises in educational settings. However, unlike traditional single-level research, methodological studies about…
Descriptors: Hierarchical Linear Modeling, Differences, Effect Size, Computation
Acar, Selcuk; Sen, Sedat; Cayirdag, Nur – Gifted Child Quarterly, 2016
Current approaches to gifted identification suggest collecting multiple sources of evidence. Some gifted identification guidelines allow for the interchangeable use of "performance" and "nonperformance" identification methods. This multiple criteria approach lacks a strong overlap between the assessment tools; however,…
Descriptors: Academically Gifted, Talent Identification, Meta Analysis, Performance
Ost, Ben; Gangopadhyaya, Anuj; Schiman, Jeffrey C. – Education Economics, 2017
Studies using tests scores as the dependent variable often report point estimates in student standard deviation units. We note that a standard deviation is not a standard unit of measurement since the distribution of test scores can vary across contexts. As such, researchers should be cautious when interpreting differences in the numerical size of…
Descriptors: Scores, Statistical Analysis, Measurement, Computation