Showing 1 to 15 of 29 results
Peer reviewed
Caspar J. Van Lissa; Eli-Boaz Clapper; Rebecca Kuiper – Research Synthesis Methods, 2024
The product Bayes factor (PBF) synthesizes evidence for an informative hypothesis across heterogeneous replication studies. It can be used when fixed- or random-effects meta-analysis falls short, for example when effect sizes are incomparable and cannot be pooled, or when studies diverge significantly in the populations, study designs, and…
Descriptors: Hypothesis Testing, Evaluation Methods, Replication (Evaluation), Sample Size
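As a rough illustration of the pooling idea described above (not the authors' implementation), the product Bayes factor simply multiplies per-study Bayes factors for the same informative hypothesis; the values below are hypothetical.

import math

# Hypothetical Bayes factors for the same informative hypothesis in four replication studies
study_bayes_factors = [2.1, 0.8, 3.5, 1.6]

product_bf = math.prod(study_bayes_factors)                   # pooled evidence across studies
log_pbf = sum(math.log(bf) for bf in study_bayes_factors)     # equivalent accumulation on the log scale
print(f"Product Bayes factor: {product_bf:.2f} (log = {log_pbf:.2f})")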
Peer reviewed
Vembye, Mikkel Helding; Pustejovsky, James Eric; Pigott, Therese Deocampo – Journal of Educational and Behavioral Statistics, 2023
Meta-analytic models for dependent effect sizes have grown increasingly sophisticated over the last few decades, which has created challenges for a priori power calculations. We introduce power approximations for tests of average effect sizes based upon several common approaches for handling dependent effect sizes. In a Monte Carlo simulation, we…
Descriptors: Meta Analysis, Robustness (Statistics), Statistical Analysis, Models
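A bare-bones sketch of the general idea, though not the authors' specific approximations: when each study contributes several correlated effect estimates, the standard error of the average effect shrinks less than independence would suggest, and power follows from that standard error. All parameter values below are hypothetical.

from scipy.stats import norm

J, k = 20, 4          # hypothetical: number of studies, effect sizes per study
v = 0.05              # sampling variance of a single effect estimate
rho = 0.6             # assumed correlation among effects within a study
tau2 = 0.01           # between-study variance (random-effects component)
delta = 0.15          # true average effect size
alpha = 0.05

var_study_mean = tau2 + v * (1 + (k - 1) * rho) / k   # variance of one study's mean effect
se_pooled = (var_study_mean / J) ** 0.5               # standard error of the overall average
z_crit = norm.ppf(1 - alpha / 2)
power = norm.cdf(delta / se_pooled - z_crit) + norm.cdf(-delta / se_pooled - z_crit)
print(f"Approximate power for a two-sided z-test: {power:.2f}")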
Peer reviewed
Clintin P. Davis-Stober; Jason Dana; David Kellen; Sara D. McMullin; Wes Bonifay – Grantee Submission, 2023
Conducting research with human subjects can be difficult because of limited sample sizes and small empirical effects. We demonstrate that this problem can yield patterns of results that are practically indistinguishable from flipping a coin to determine the direction of treatment effects. We use this idea of random conclusions to establish a…
Descriptors: Research Methodology, Sample Size, Effect Size, Hypothesis Testing
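The point can be seen in a small simulation (illustrative only, not the authors' analysis): with small groups and a near-zero true effect, the estimated direction of the treatment effect behaves almost like a coin flip. Sample sizes and effect values below are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
n_per_group, true_effect, n_sims = 15, 0.05, 10_000   # hypothetical values

treatment = rng.normal(true_effect, 1.0, size=(n_sims, n_per_group))
control = rng.normal(0.0, 1.0, size=(n_sims, n_per_group))
estimated_effects = treatment.mean(axis=1) - control.mean(axis=1)
print(f"Share of simulations concluding a positive effect: {(estimated_effects > 0).mean():.2f}")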
Peer reviewed
Elliott, Mark; Buttery, Paula – Educational and Psychological Measurement, 2022
We investigate two non-iterative estimation procedures for Rasch models, the pair-wise estimation procedure (PAIR) and the Eigenvector method (EVM), and identify theoretical issues with EVM for rating scale model (RSM) threshold estimation. We develop a new procedure to resolve these issues--the conditional pairwise adjacent thresholds procedure…
Descriptors: Item Response Theory, Rating Scales, Computation, Simulation
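As a simplified stand-in for the procedures studied above, the classical pairwise idea for the dichotomous Rasch model can be sketched as follows: the log of the ratio of persons passing item i but failing item j to the reverse estimates the difficulty difference, and averaging these log ratios gives centered difficulty estimates. The rating-scale threshold case addressed in the article is more involved; the data below are simulated.

import numpy as np

rng = np.random.default_rng(1)
n_persons, difficulties = 500, np.array([-1.0, -0.3, 0.4, 0.9])   # hypothetical item difficulties
ability = rng.normal(0, 1, n_persons)
prob = 1 / (1 + np.exp(-(ability[:, None] - difficulties[None, :])))
responses = (rng.uniform(size=prob.shape) < prob).astype(int)

k = responses.shape[1]
log_ratio = np.zeros((k, k))
for i in range(k):
    for j in range(k):
        if i != j:
            n_ij = np.sum((responses[:, i] == 1) & (responses[:, j] == 0))
            n_ji = np.sum((responses[:, j] == 1) & (responses[:, i] == 0))
            log_ratio[i, j] = np.log(n_ij / n_ji)   # estimates difficulty_j - difficulty_i

estimates = log_ratio.mean(axis=0)    # average over comparison items
estimates -= estimates.mean()         # centre the scale, matching the simulated difficulties
print(np.round(estimates, 2), "vs true", difficulties)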
Peer reviewed
Joshi, Megha; Pustejovsky, James E.; Beretvas, S. Natasha – Research Synthesis Methods, 2022
The most common and well-known meta-regression models work under the assumption that there is only one effect size estimate per study and that the estimates are independent. However, meta-analytic reviews of social science research often include multiple effect size estimates per primary study, leading to dependence in the estimates. Some…
Descriptors: Meta Analysis, Regression (Statistics), Models, Effect Size
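One widely used remedy in this literature is robust (cluster-based) variance estimation, which keeps the point estimates but corrects standard errors for dependence within studies. The following is a bare-bones sketch with simulated data, not the authors' exact estimator or weighting scheme.

import numpy as np

rng = np.random.default_rng(2)
study_id = np.repeat(np.arange(10), 3)            # 10 hypothetical studies, 3 effect sizes each
x = rng.normal(size=study_id.size)                # one moderator
study_shock = rng.normal(0, 0.2, 10)[study_id]    # induces within-study dependence
y = 0.3 + 0.2 * x + study_shock + rng.normal(0, 0.3, study_id.size)

X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)          # simple (unweighted) meta-regression fit
resid = y - X @ beta

bread = np.linalg.inv(X.T @ X)
meat = np.zeros((2, 2))
for s in np.unique(study_id):
    Xs, es = X[study_id == s], resid[study_id == s]
    meat += Xs.T @ np.outer(es, es) @ Xs
vcov = bread @ meat @ bread                        # CR0-type cluster-robust sandwich
print("slope:", round(float(beta[1]), 3), "cluster-robust SE:", round(float(np.sqrt(vcov[1, 1])), 3))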
Peer reviewed
Xia, Yan; Green, Samuel B.; Xu, Yuning; Thompson, Marilyn S. – Educational and Psychological Measurement, 2019
Past research suggests revised parallel analysis (R-PA) tends to yield relatively accurate results in determining the number of factors in exploratory factor analysis. R-PA can be interpreted as a series of hypothesis tests. At each step in the series, a null hypothesis is tested that an additional factor accounts for zero common variance among…
Descriptors: Effect Size, Factor Analysis, Hypothesis Testing, Psychometrics
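For orientation, the basic parallel-analysis comparison underlying R-PA can be sketched as below: retain a factor only while the observed eigenvalue exceeds the corresponding eigenvalue from random data. R-PA refines this into a sequence of hypothesis tests on residualized data, which this sketch does not attempt; the data are simulated.

import numpy as np

rng = np.random.default_rng(3)
n, p, n_reps = 300, 8, 200                         # hypothetical sample size, items, random datasets
loadings = np.zeros((p, 2)); loadings[:4, 0] = 0.7; loadings[4:, 1] = 0.7
data = rng.normal(size=(n, 2)) @ loadings.T + rng.normal(0, 0.5, (n, p))

obs_eigs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
rand_eigs = np.array([
    np.linalg.eigvalsh(np.corrcoef(rng.normal(size=(n, p)), rowvar=False))[::-1]
    for _ in range(n_reps)
])
threshold = np.percentile(rand_eigs, 95, axis=0)   # 95th percentile of random-data eigenvalues

keep = 0
while keep < p and obs_eigs[keep] > threshold[keep]:
    keep += 1
print("factors retained:", keep)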
Peer reviewed
Full text PDF available on ERIC
Jane E. Miller – Numeracy, 2023
Students often believe that statistical significance is the only determinant of whether a quantitative result is "important." In this paper, I review traditional null hypothesis statistical testing to identify what questions inferential statistics can and cannot answer, including statistical significance, effect size and direction,…
Descriptors: Statistical Significance, Holistic Approach, Statistical Inference, Effect Size
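The distinction stressed above can be made concrete with a small calculation (values hypothetical): with a large enough sample, even a trivially small standardized effect yields a tiny p value, so significance alone says little about substantive importance.

from scipy.stats import norm

effect_size_d = 0.03           # trivially small standardized mean difference (hypothetical)
n_per_group = 50_000
se = (2 / n_per_group) ** 0.5  # large-sample standard error of d for two equal groups
z = effect_size_d / se
p_value = 2 * norm.sf(abs(z))
print(f"d = {effect_size_d}, z = {z:.2f}, p = {p_value:.1e}")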
Peer reviewed
Full text PDF available on ERIC
Deke, John; Finucane, Mariel; Thal, Daniel – National Center for Education Evaluation and Regional Assistance, 2022
BASIE is a framework for interpreting impact estimates from evaluations. It is an alternative to null hypothesis significance testing. This guide walks researchers through the key steps of applying BASIE, including selecting prior evidence, reporting impact estimates, interpreting impact estimates, and conducting sensitivity analyses. The guide…
Descriptors: Bayesian Statistics, Educational Research, Data Interpretation, Hypothesis Testing
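The core interpretive step can be illustrated with a generic conjugate-normal update: combine a prior distribution for the true impact (informed by prior evidence) with the study's impact estimate and standard error, then report the posterior probability of a favorable effect. This is a simplified illustration, not the guide's specific priors or procedure; all numbers are hypothetical.

from scipy.stats import norm

prior_mean, prior_sd = 0.05, 0.10    # hypothetical prior evidence about impacts
impact_est, impact_se = 0.12, 0.06   # hypothetical impact estimate and standard error

prior_var, est_var = prior_sd ** 2, impact_se ** 2
post_var = 1 / (1 / prior_var + 1 / est_var)
post_mean = post_var * (prior_mean / prior_var + impact_est / est_var)

prob_positive = 1 - norm.cdf(0, loc=post_mean, scale=post_var ** 0.5)
print(f"Posterior mean impact: {post_mean:.3f}; P(impact > 0) = {prob_positive:.2f}")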
Miocevic, Milica; Klaassen, Fayette; Geuke, Gemma; Moeyaert, Mariola; Maric, Marija – Grantee Submission, 2020
Single-Case Experimental Designs (SCEDs) have lately been recognized as a valuable alternative to large group studies. SCEDs form a great tool for the evaluation of treatment effectiveness in heterogeneous and low-incidence conditions, which are common in the field of communication disorders. Mediation analysis is indispensable in treatment…
Descriptors: Bayesian Statistics, Computation, Intervention, Case Studies
Hedges, Larry V.; Schauer, Jacob M. – Journal of Educational and Behavioral Statistics, 2019
The problem of assessing whether experimental results can be replicated is becoming increasingly important in many areas of science. It is often assumed that assessing replication is straightforward: All one needs to do is repeat the study and see whether the results of the original and replication studies agree. This article shows that the…
Descriptors: Replication (Evaluation), Research Design, Research Methodology, Program Evaluation
Hedges, Larry V.; Schauer, Jacob M. – Grantee Submission, 2019
The problem of assessing whether experimental results can be replicated is becoming increasingly important in many areas of science. It is often assumed that assessing replication is straightforward: All one needs to do is repeat the study and see whether the results of the original and replication studies agree. This article shows that the…
Descriptors: Replication (Evaluation), Research Design, Research Methodology, Program Evaluation
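One formal way to ask whether an original study and its replications agree, in the spirit of the two entries above, is a heterogeneity (Q) test across the estimates. The sketch below uses hypothetical estimates and is not the authors' specific procedure.

import numpy as np
from scipy.stats import chi2

estimates = np.array([0.40, 0.15, 0.22])     # hypothetical original study plus two replications
variances = np.array([0.020, 0.010, 0.015])  # their sampling variances

weights = 1 / variances
pooled = np.sum(weights * estimates) / np.sum(weights)
Q = np.sum(weights * (estimates - pooled) ** 2)
p_value = chi2.sf(Q, df=len(estimates) - 1)
print(f"Q = {Q:.2f}, p = {p_value:.3f}  (small p suggests the studies disagree)")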
Peer reviewed
Full text PDF available on ERIC
Eser, Mehmet Taha; Yurtçu, Meltem – International Journal of Progressive Education, 2020
This study examines meta-analysis studies published in the field of educational sciences from 2010 to 2019 in journals indexed within the scope of the TÜBITAK ULAKBIM TR Directory. Of the 163 studies scanned, 26 meta-analysis studies that meet the inclusion criteria constitute the sample of this content analysis study. Within this research, a…
Descriptors: Meta Analysis, Educational Research, Journal Articles, Research Methodology
Peer reviewed
Wiens, Stefan; Nilsson, Mats E. – Educational and Psychological Measurement, 2017
Because of the continuing debates about statistics, many researchers may feel confused about how to analyze and interpret data. Current guidelines in psychology advocate the use of effect sizes and confidence intervals (CIs). However, researchers may be unsure about how to extract effect sizes from factorial designs. Contrast analysis is helpful…
Descriptors: Data Analysis, Effect Size, Computation, Statistical Analysis
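A minimal numerical sketch of the contrast idea in a 2x2 factorial design (simulated data, not the article's worked example): define contrast weights over the cell means, compute the contrast estimate, and standardize it by the pooled within-cell standard deviation.

import numpy as np

rng = np.random.default_rng(4)
true_means = {"a1b1": 0.0, "a1b2": 0.2, "a2b1": 0.5, "a2b2": 0.7}   # hypothetical cell means
cells = {name: rng.normal(mu, 1.0, 30) for name, mu in true_means.items()}

weights = {"a1b1": -0.5, "a1b2": -0.5, "a2b1": 0.5, "a2b2": 0.5}    # contrast for the main effect of factor A
contrast = sum(w * cells[name].mean() for name, w in weights.items())
pooled_sd = np.sqrt(np.mean([cells[name].var(ddof=1) for name in cells]))
print(f"Contrast estimate: {contrast:.2f}; standardized effect: {contrast / pooled_sd:.2f}")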
Peer reviewed
Simpson, Adrian – Educational Researcher, 2019
A recent paper uses Bayes factors to argue a large minority of rigorous, large-scale education RCTs are "uninformative." The definition of "uninformative" depends on the authors' hypothesis choices for calculating Bayes factors. These arguably overadjust for effect size inflation and involve a fixed prior distribution,…
Descriptors: Randomized Controlled Trials, Bayesian Statistics, Educational Research, Program Evaluation
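The sensitivity highlighted above can be made concrete: for a normally distributed impact estimate, the Bayes factor comparing "no effect" against a normal prior on the effect follows from two normal densities, and it moves with the prior scale. The estimate and prior scales below are hypothetical.

from scipy.stats import norm

impact_est, se = 0.08, 0.05   # hypothetical trial impact estimate and standard error

for prior_sd in (0.05, 0.20, 0.50):
    likelihood_h0 = norm.pdf(impact_est, loc=0, scale=se)
    marginal_h1 = norm.pdf(impact_est, loc=0, scale=(se ** 2 + prior_sd ** 2) ** 0.5)
    print(f"prior sd = {prior_sd:.2f}: BF01 = {likelihood_h0 / marginal_h1:.2f}")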
Peer reviewed
Wilcox, Rand R.; Serang, Sarfaraz – Educational and Psychological Measurement, 2017
The article provides perspectives on p values, null hypothesis testing, and alternative techniques in light of modern robust statistical methods. Null hypothesis testing and "p" values can provide useful information provided they are interpreted in a sound manner, which includes taking into account insights and advances that have…
Descriptors: Hypothesis Testing, Bayesian Statistics, Computation, Effect Size
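One modern robust alternative alluded to above is inference on a trimmed mean with a percentile bootstrap confidence interval; a brief sketch on simulated skewed data follows (not the article's code).

import numpy as np
from scipy.stats import trim_mean

rng = np.random.default_rng(5)
data = rng.lognormal(mean=0, sigma=1, size=60)   # skewed, outlier-prone sample

boot = np.array([
    trim_mean(rng.choice(data, size=data.size, replace=True), 0.2)
    for _ in range(2000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"20% trimmed mean = {trim_mean(data, 0.2):.2f}, 95% bootstrap CI = [{lo:.2f}, {hi:.2f}]")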