Publication Date
| Date range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 2 |
| Since 2017 (last 10 years) | 5 |
| Since 2007 (last 20 years) | 8 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Evaluation Methods | 14 |
| Hypothesis Testing | 14 |
| Sample Size | 14 |
| Effect Size | 5 |
| Correlation | 4 |
| Educational Research | 4 |
| Monte Carlo Methods | 4 |
| Statistical Analysis | 4 |
| Bayesian Statistics | 3 |
| Computation | 3 |
| Error of Measurement | 3 |
Publication Type
| Publication Type | Count |
| --- | --- |
| Journal Articles | 10 |
| Guides - Non-Classroom | 4 |
| Reports - Research | 4 |
| Reports - Evaluative | 3 |
| Reports - Descriptive | 2 |
| Dissertations/Theses -… | 1 |
Education Level
| Education Level | Count |
| --- | --- |
| Higher Education | 1 |

Audience
| Audience | Count |
| --- | --- |
| Researchers | 4 |
Xiao Liu; Zhiyong Zhang; Lijuan Wang – Grantee Submission, 2024
In psychology, researchers are often interested in testing hypotheses about mediation, such as testing the presence of a mediation effect of a treatment (e.g., intervention assignment) on an outcome via a mediator. An increasingly popular approach to testing hypotheses is the Bayesian testing approach with Bayes factors (BFs). Despite the growing…
Descriptors: Sample Size, Bayesian Statistics, Programming Languages, Simulation
Caspar J. Van Lissa; Eli-Boaz Clapper; Rebecca Kuiper – Research Synthesis Methods, 2024
The product Bayes factor (PBF) synthesizes evidence for an informative hypothesis across heterogeneous replication studies. It can be used when fixed- or random-effects meta-analysis falls short: for example, when effect sizes are incomparable and cannot be pooled, or when studies diverge significantly in their populations, study designs, and…
Descriptors: Hypothesis Testing, Evaluation Methods, Replication (Evaluation), Sample Size
Porter, Kristin E. – Journal of Research on Educational Effectiveness, 2018
Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
Spencer, Neil H.; Lay, Margaret; Kevan de Lopez, Lindsey – International Journal of Social Research Methodology, 2017
When undertaking quantitative hypothesis testing, social researchers need to decide whether the data with which they are working is suitable for parametric analyses to be used. When considering the relevant assumptions they can examine graphs and summary statistics but the decision making process is subjective and must also take into account the…
Descriptors: Evaluation Methods, Decision Making, Hypothesis Testing, Social Science Research
Porter, Kristin E. – Grantee Submission, 2017
Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
Porter, Kristin E. – MDRC, 2016
In education research and in many other fields, researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
Spencer, Bryden – ProQuest LLC, 2016
Value-added models are a class of growth models used in education to assign responsibility for student growth to teachers or schools. For value-added models to be used fairly, sufficient statistical precision is necessary for accurate teacher classification. Previous research indicated precision below practical limits. An alternative approach has…
Descriptors: Monte Carlo Methods, Comparative Analysis, Accuracy, High Stakes Tests
Slocum-Gori, Suzanne L.; Zumbo, Bruno D. – Social Indicators Research, 2011
Whenever one uses a composite scale score from item responses, one is tacitly assuming that the scale is dominantly unidimensional. Investigating the unidimensionality of item response data is an essential component of construct validity. Yet, there is no universally accepted technique or set of rules to determine the number of factors to retain…
Descriptors: Sample Size, Construct Validity, Measures (Individuals), Hypothesis Testing
Schneider, Anne L.; Darcy, Robert E. – Evaluation Review, 1984 (peer reviewed)
The normative implications of applying significance tests in evaluation research are examined. The authors conclude that evaluators often make normative decisions based on the traditional .05 significance level in studies with small samples. Additional reporting of the magnitude of impact, the significance level, and the power of the test is…
Descriptors: Evaluation Methods, Hypothesis Testing, Research Methodology, Research Problems
Asraf, Ratnawati Mohd; Brewer, James K. – Australian Educational Researcher, 2004
This article addresses the importance of obtaining a sample of an adequate size for the purpose of testing hypotheses. The logic underlying the requirement for a minimum sample size for hypothesis testing is discussed, as well as the criteria for determining it. Implications for researchers working with convenient samples of a fixed size are also…
Descriptors: Hypothesis Testing, Sample Size, Sampling, Research Methodology
Kaplan, David – Multivariate Behavioral Research, 1990 (peer reviewed)
A strategy for evaluating/modifying covariance structure models (CSMs) is presented. The approach uses recent developments in estimation under nonstandard conditions and unified asymptotic theory related to hypothesis testing, and it determines the extent of sample size sensitivity and specification error effects by relying on existing statistical…
Descriptors: Error of Measurement, Estimation (Mathematics), Evaluation Methods, Goodness of Fit
Conquest, Loveday L. – Environmental Monitoring and Assessment, 1993 (peer reviewed)
Presents two statistical topics and examples of their use in natural resource monitoring. The first topic deals with use of correlated observations in calculations of variance estimates for a regional mean, required sample size determination, and confidence intervals. The second topic concerns the use of Bayesian techniques in hypothesis testing.…
Descriptors: Bayesian Statistics, Environmental Education, Environmental Research, Evaluation Methods
Kahn, Jeffrey H. – Counseling Psychologist, 2006
Exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) have contributed to test development and validation in counseling psychology, but additional applications have not been fully realized. The author presents an overview of the goals, terminology, and procedures of factor analysis; reviews best practices for extracting,…
Descriptors: Factor Analysis, Counseling Psychology, Objectives, Guidelines
Long, Jeffrey D. – Psychological Methods, 2005
Often quantitative data in the social sciences have only ordinal justification. Problems of interpretation can arise when least squares multiple regression (LSMR) is used with ordinal data. Two ordinal alternatives are discussed, dominance-based ordinal multiple regression (DOMR) and proportional odds multiple regression. The Q[superscript 2]…
Descriptors: Simulation, Social Science Research, Error of Measurement, Least Squares Statistics
