Showing all 12 results
Peer reviewed
Uwimpuhwe, Germaine; Singh, Akansha; Higgins, Steve; Kasim, Adetayo – International Journal of Research & Method in Education, 2021
Educational researchers advocate the use of an effect size and its confidence interval to assess the effectiveness of interventions, instead of relying on a p-value, which has been blamed for the lack of reproducibility of research findings and for the misuse of statistics. The aim of this study is to provide a framework that can offer direct evidence…
Descriptors: Educational Research, Randomized Controlled Trials, Bayesian Statistics, Effect Size
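The posterior-probability style of reporting described in this abstract can be illustrated with a minimal sketch (not the authors' actual model): assuming an approximately normal likelihood for an estimated effect size and a vague prior, the posterior is normal and direct probability statements follow. The estimate, standard error, and 0.10 threshold below are invented for illustration.

```python
# Minimal sketch (not the authors' model) of reporting an intervention effect
# as a posterior probability rather than a p-value, assuming an approximately
# normal likelihood for the estimated effect size and a vague (flat) prior,
# so the posterior is Normal(estimate, se**2). Values are illustrative.
from scipy.stats import norm

estimate, se = 0.15, 0.08   # illustrative effect size (e.g., Hedges' g) and its SE

posterior = norm(loc=estimate, scale=se)
print("P(effect > 0)    =", round(1 - posterior.cdf(0.0), 3))
print("P(effect > 0.10) =", round(1 - posterior.cdf(0.10), 3))  # a minimum effect of interest
print("95% credible interval:", [round(x, 3) for x in posterior.interval(0.95)])
```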
Peer reviewed
PDF on ERIC
Deke, John; Finucane, Mariel; Thal, Daniel – National Center for Education Evaluation and Regional Assistance, 2022
BASIE is a framework for interpreting impact estimates from evaluations. It is an alternative to null hypothesis significance testing. This guide walks researchers through the key steps of applying BASIE, including selecting prior evidence, reporting impact estimates, interpreting impact estimates, and conducting sensitivity analyses. The guide…
Descriptors: Bayesian Statistics, Educational Research, Data Interpretation, Hypothesis Testing
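For orientation only, a hedged sketch of the kind of update that BASIE-style reporting involves, assuming a conjugate normal-normal model; the prior parameters, impact estimate, and standard error below are illustrative and not taken from the guide.

```python
# Hedged sketch of a normal-normal Bayesian update in the spirit of BASIE:
# prior evidence about impacts (mean, sd) is combined with a study's impact
# estimate and standard error to give a posterior probability that the
# intervention had a positive effect. All values are illustrative only.
from scipy.stats import norm

prior_mean, prior_sd = 0.05, 0.10   # assumed distribution of impacts in prior evidence
estimate, se = 0.20, 0.12           # assumed impact estimate and its standard error

w = (1 / prior_sd**2) / (1 / prior_sd**2 + 1 / se**2)   # precision weight on the prior
post_mean = w * prior_mean + (1 - w) * estimate
post_sd = (1 / (1 / prior_sd**2 + 1 / se**2)) ** 0.5

print(f"posterior mean = {post_mean:.3f}, posterior sd = {post_sd:.3f}")
print("P(impact > 0) =", round(1 - norm(post_mean, post_sd).cdf(0), 3))
```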
Batley, Prathiba Natesan; Minka, Tom; Hedges, Larry Vernon – Grantee Submission, 2020
Immediacy is one of the criteria necessary to show strong evidence of a treatment effect in single-case experimental designs (SCEDs). With the exception of Natesan and Hedges (2017), no inferential statistical tool has been used to demonstrate or quantify it until now. We investigate and quantify immediacy by treating the change-points between the…
Descriptors: Bayesian Statistics, Monte Carlo Methods, Statistical Inference, Markov Processes
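A generic illustration, much simpler than the authors' SCED models, of treating a change point as a random quantity: enumerate candidate change points, score each by the marginal likelihood of the two resulting segments, and normalise. The noise and prior standard deviations below are assumptions.

```python
# Generic sketch (not the authors' SCED model) of treating a change point as a
# random variable: for each candidate change point k, compute the marginal
# likelihood of the two segments under a normal model with a normal prior on
# each segment mean and known noise sd, then normalise to a posterior over k.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(2.0, 1.0, 8), rng.normal(5.0, 1.0, 8)])  # simulated phases A/B
sigma, tau = 1.0, 5.0  # assumed noise sd and prior sd on segment means

def log_marginal(seg):
    n = len(seg)
    cov = sigma**2 * np.eye(n) + tau**2 * np.ones((n, n))   # segment mean integrated out
    return multivariate_normal(mean=np.zeros(n), cov=cov).logpdf(seg)

ks = range(2, len(y) - 1)   # candidate change points, at least 2 points per segment
logpost = np.array([log_marginal(y[:k]) + log_marginal(y[k:]) for k in ks])
post = np.exp(logpost - logpost.max())
post /= post.sum()
print("most probable change point:", list(ks)[int(post.argmax())])
```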
Peer reviewed
Marsman, Maarten; Wagenmakers, Eric-Jan – Educational and Psychological Measurement, 2017
P values have been critiqued on several grounds but remain entrenched as the dominant inferential method in the empirical sciences. In this article, we elaborate on the fact that in many statistical models, the one-sided "P" value has a direct Bayesian interpretation as the approximate posterior mass for values lower than zero. The…
Descriptors: Bayesian Statistics, Statistical Inference, Probability, Statistical Analysis
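The claim can be checked numerically in the simplest case the article builds on: with a normal estimate and a flat prior, the posterior mass below zero equals the one-sided p-value exactly. The estimate and standard error below are arbitrary.

```python
# Numeric check of the claim for a simple case: with a normal estimate and a
# flat prior, the posterior for the parameter is Normal(estimate, se**2), so
# the posterior mass below zero equals the one-sided p-value for the point null.
from scipy.stats import norm

estimate, se = 0.35, 0.20                      # assumed estimate and standard error
one_sided_p = 1 - norm.cdf(estimate / se)      # P(Z >= z_obs) under the null
posterior_mass_below_zero = norm(estimate, se).cdf(0.0)

print(round(one_sided_p, 6), round(posterior_mass_below_zero, 6))  # identical
```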
Peer reviewed
Uwimpuhwe, Germaine; Singh, Akansha; Higgins, Steve; Coux, Mickael; Xiao, ZhiMin; Shkedy, Ziv; Kasim, Adetayo – Journal of Experimental Education, 2022
Educational stakeholders are keen to know the magnitude and importance of different interventions. However, the way evidence is communicated to support understanding of the effectiveness of an intervention is controversial. Typically, studies in education have used the standardised mean difference as a measure of the impact of interventions. This…
Descriptors: Program Effectiveness, Intervention, Multivariate Analysis, Bayesian Statistics
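For contrast with the posterior summaries sketched above, a short sketch of the conventional standardised-mean-difference summary the abstract says education studies typically report: Cohen's d with a pooled standard deviation and a large-sample confidence interval. The group statistics are invented.

```python
# Sketch of the conventional summary the abstract refers to: a standardised
# mean difference (Cohen's d with pooled SD) and an approximate 95% CI.
# Group statistics are illustrative, not from the study.
import math

m_t, sd_t, n_t = 52.0, 10.0, 60   # treatment mean, sd, n (assumed)
m_c, sd_c, n_c = 49.0, 11.0, 60   # control mean, sd, n (assumed)

pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
d = (m_t - m_c) / pooled_sd
se_d = math.sqrt((n_t + n_c) / (n_t * n_c) + d**2 / (2 * (n_t + n_c)))  # large-sample approximation

print(f"d = {d:.3f}, 95% CI = ({d - 1.96 * se_d:.3f}, {d + 1.96 * se_d:.3f})")
```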
Crawford, Aaron – ProQuest LLC, 2014
This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex…
Descriptors: Bayesian Statistics, Networks, Models, Goodness of Fit
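A toy posterior predictive model check, far simpler than the Bayesian network models investigated in the study, showing the general mechanics: draw replicated data from the posterior predictive distribution and compare a discrepancy measure with its observed value. The Beta-Bernoulli model and longest-run discrepancy are illustrative choices, not the study's.

```python
# Generic posterior predictive model check (PPMC): fit a Beta-Bernoulli model
# to binary responses, draw replicated datasets from the posterior predictive,
# and compare a discrepancy measure (here, the longest run of 1s) with the
# observed value to obtain a posterior predictive p-value.
import numpy as np

rng = np.random.default_rng(1)
y = np.array([1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1])  # illustrative data

def longest_run(x):
    best = run = 0
    for v in x:
        run = run + 1 if v == 1 else 0
        best = max(best, run)
    return best

obs = longest_run(y)
a, b = 1 + y.sum(), 1 + len(y) - y.sum()   # Beta(1, 1) prior -> Beta posterior
reps = []
for _ in range(2000):
    theta = rng.beta(a, b)                 # draw a success probability from the posterior
    reps.append(longest_run(rng.binomial(1, theta, size=len(y))))  # replicated data

ppp = np.mean(np.array(reps) >= obs)       # posterior predictive p-value
print(f"observed discrepancy = {obs}, posterior predictive p = {ppp:.3f}")
```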
Peer reviewed
Lee, HwaYoung; Beretvas, S. Natasha – Educational and Psychological Measurement, 2014
Conventional differential item functioning (DIF) detection methods (e.g., the Mantel-Haenszel test) can be used to detect DIF only across observed groups, such as gender or ethnicity. However, research has found that DIF is not typically fully explained by an observed variable. True sources of DIF may include unobserved, latent variables, such as…
Descriptors: Item Analysis, Factor Structure, Bayesian Statistics, Goodness of Fit
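For reference, a sketch of the conventional Mantel-Haenszel approach the abstract contrasts with its latent-class extension: a common odds ratio pooled over matched-score strata, converted to the ETS delta scale. The stratum counts below are invented.

```python
# Sketch of conventional Mantel-Haenszel DIF detection across observed groups:
# a common odds ratio pooled over matched total-score strata. Each stratum is
# (ref_correct, ref_incorrect, focal_correct, focal_incorrect); counts invented.
import math

strata = [(30, 10, 22, 18), (45, 15, 35, 25), (50, 5, 44, 11)]

num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
alpha_mh = num / den                      # MH common odds ratio
delta_mh = -2.35 * math.log(alpha_mh)     # ETS delta scale; |delta| >= 1.5 conventionally flags large DIF

print(f"MH odds ratio = {alpha_mh:.3f}, ETS delta = {delta_mh:.3f}")
```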
Peer reviewed
Miller, Jeff; Schwarz, Wolf – Psychological Methods, 2011
We study a model of the research process in which the true effect size, the replication jitter due to changes in experimental procedure, and the statistical error of effect size measurement are all normally distributed random variables. Within this model, we analyze the probability of successfully replicating an initial experimental result by…
Descriptors: Models, Research, Effect Size, Probability
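The abstract's setup lends itself to a quick Monte Carlo sketch: draw normally distributed true effects, replication jitter, and measurement error, and estimate the probability that a replicate of a positive initial result is also positive. The standard deviations below are assumptions, not the article's values.

```python
# Monte Carlo sketch of the setup described above: an observed effect is the
# sum of a true effect, replication jitter (procedural changes), and
# measurement error, all normal. Estimate the probability that a replication
# of a positive initial result is also positive. Parameter values illustrative.
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
true_effect = rng.normal(0.3, 0.2, n)          # true effect sizes across studies
jitter1, jitter2 = rng.normal(0, 0.1, (2, n))  # replication jitter: initial vs replicate
err1, err2 = rng.normal(0, 0.15, (2, n))       # statistical measurement error

d1 = true_effect + jitter1 + err1              # initial observed effect
d2 = true_effect + jitter2 + err2              # replicate observed effect

positive_initial = d1 > 0
p_same_sign = np.mean(d2[positive_initial] > 0)
print(f"P(replicate > 0 | initial > 0) = {p_same_sign:.3f}")
```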
Peer reviewed
Gallistel, C. R. – Psychological Review, 2009
Null hypotheses are simple, precise, and theoretically important. Conventional statistical analysis cannot support them; Bayesian analysis can. The challenge in a Bayesian analysis is to formulate a suitably vague alternative, because the vaguer the alternative is (the more it spreads out the unit mass of prior probability), the more the null is…
Descriptors: Bayesian Statistics, Statistical Analysis, Probability, Hypothesis Testing
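The trade-off described here can be made concrete with a simple Bayes factor sketch: a point null against normal alternatives of increasing width, evaluated with a normal estimate. As the alternative prior spreads out, the Bayes factor shifts toward the null. The numbers are illustrative.

```python
# Sketch of the trade-off described above: a Bayes factor for a point null
# (theta = 0) against alternatives of increasing vagueness, theta ~ N(0, tau^2),
# given a normal estimate. Under the alternative the estimate is marginally
# N(0, se^2 + tau^2), so widening the prior (larger tau) favours the null.
from scipy.stats import norm

estimate, se = 0.10, 0.15    # illustrative estimate and standard error

for tau in (0.1, 0.5, 2.0, 10.0):
    bf01 = norm(0, se).pdf(estimate) / norm(0, (se**2 + tau**2) ** 0.5).pdf(estimate)
    print(f"prior sd = {tau:>5}: BF in favour of the null = {bf01:.2f}")
```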
Peer reviewed
Iverson, Geoffrey J.; Wagenmakers, Eric-Jan; Lee, Michael D. – Psychological Methods, 2010
The purpose of the recently proposed "p[subscript rep]" statistic is to estimate the probability of concurrence, that is, the probability that a replicate experiment yields an effect of the same sign (Killeen, 2005a). The influential journal "Psychological Science" endorses "p[subscript rep]" and recommends its use…
Descriptors: Effect Size, Evaluation Methods, Probability, Experiments
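Under the usual derivation of "p[subscript rep]" (a flat prior and equal sampling error in the replicate), the statistic has a closed form, sketched below with invented values; this illustrates the quantity under discussion, not the article's analysis of it.

```python
# Sketch of the p_rep statistic discussed above (probability that a replicate
# effect has the same sign): with a flat prior, the replicate effect given an
# observed effect d1 with sampling sd sigma is N(d1, 2 * sigma**2), so
# p_rep = Phi(d1 / (sigma * sqrt(2))). Values are illustrative.
import math
from scipy.stats import norm

d1, sigma = 0.40, 0.25                      # assumed observed effect and its standard error
p_rep = norm.cdf(d1 / (sigma * math.sqrt(2)))
print(f"p_rep = {p_rep:.3f}")
```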
Peer reviewed
Cumming, Geoff – Psychological Methods, 2010
This comment offers three descriptions of "p[subscript rep]" that start with a frequentist account of confidence intervals, draw on R. A. Fisher's fiducial argument, and do not make Bayesian assumptions. Links are described among "p[subscript rep]," "p" values, and the probability a confidence interval will capture…
Descriptors: Replication (Evaluation), Measurement Techniques, Research Methodology, Validity
Peer reviewed
Hoffman, Bobby; Schraw, Gregory – Educational Psychologist, 2010
The purpose of this article is to clarify conceptions, definitions, and applications of learning and problem-solving efficiency. Conceptions of efficiency vary within the field of educational psychology, and there is little consensus as to how to define, measure, and interpret the efficiency construct. We compare three diverse models that differ…
Descriptors: Educational Psychology, Efficiency, Problem Solving, Models