Showing all 15 results
Peer reviewed; PDF full text available on ERIC
What Works Clearinghouse, 2020
The What Works Clearinghouse (WWC) is an initiative of the U.S. Department of Education's Institute of Education Sciences (IES), which was established under the Education Sciences Reform Act of 2002. It is an important part of IES's strategy to use rigorous and relevant research, evaluation, and statistics to improve the nation's education system.…
Descriptors: Educational Research, Evaluation Methods, Evidence, Statistical Significance
Peer reviewed; PDF full text available on ERIC
Steiner, Peter M.; Wong, Vivian – Society for Research on Educational Effectiveness, 2016
Despite recent emphasis on the use of randomized control trials (RCTs) for evaluating education interventions, in most areas of education research, observational methods remain the dominant approach for assessing program effects. Over the last three decades, the within-study comparison (WSC) design has emerged as a method for evaluating the…
Descriptors: Randomized Controlled Trials, Comparative Analysis, Research Design, Evaluation Methods
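To make the within-study comparison logic concrete, here is a minimal simulation sketch (not from the paper; the data-generating process, the effect size, and all variable names are illustrative assumptions). It estimates the same treatment effect once under random assignment and once under self-selection, and reads the difference as the bias a WSC design would detect:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated population: one covariate drives both selection and outcome.
n = 10_000
ability = rng.normal(0, 1, n)
true_effect = 0.30

def outcome(treated, ability):
    return true_effect * treated + 0.8 * ability + rng.normal(0, 1, len(ability))

# Benchmark arm: random assignment (the RCT).
t_rct = rng.integers(0, 2, n)
y_rct = outcome(t_rct, ability)
rct_est = y_rct[t_rct == 1].mean() - y_rct[t_rct == 0].mean()

# Comparison arm: self-selection on ability (the observational study).
t_obs = (ability + rng.normal(0, 1, n) > 0).astype(int)
y_obs = outcome(t_obs, ability)
obs_est = y_obs[t_obs == 1].mean() - y_obs[t_obs == 0].mean()

print(f"RCT benchmark estimate:     {rct_est:.3f}")
print(f"Observational estimate:     {obs_est:.3f}")
print(f"Estimated bias (WSC logic): {obs_est - rct_est:.3f}")
```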
Peer reviewed; PDF full text available on ERIC
What Works Clearinghouse, 2017
The What Works Clearinghouse (WWC) systematic review process is the basis of many of its products, enabling the WWC to use consistent, objective, and transparent standards and procedures in its reviews, while also ensuring comprehensive coverage of the relevant literature. The WWC systematic review process consists of five steps: (1) Developing…
Descriptors: Educational Research, Evaluation Methods, Evidence, Statistical Significance
Peer reviewed; PDF full text available on ERIC
Citkowicz, Martyna; Hedges, Larry V. – Society for Research on Educational Effectiveness, 2013
In some instances, intentionally or not, study designs are such that there is clustering in one group but not in the other. This paper describes methods for computing effect size estimates and their variances when there is clustering in only one group and the analysis has not taken that clustering into account. The authors provide the effect size…
Descriptors: Multivariate Analysis, Effect Size, Sampling, Sample Size
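Citkowicz and Hedges derive the exact estimators; the sketch below is only a simplified illustration of why ignoring one-sided clustering misleads, applying the textbook design-effect inflation to the clustered arm's share of the variance of a standardized mean difference (a large-sample approximation assumed here, not the authors' formula):

```python
import math

def var_d_one_group_clustered(d, n_t, n_c, m, icc):
    """Approximate variance of the standardized mean difference d when
    the treatment group is clustered (clusters of size m, intraclass
    correlation icc) and the control group is not. Illustration only."""
    deff = 1 + (m - 1) * icc          # design effect for the clustered arm
    return deff / n_t + 1 / n_c + d**2 / (2 * (n_t + n_c - 2))

d, n_t, n_c = 0.40, 120, 120
naive = var_d_one_group_clustered(d, n_t, n_c, m=1, icc=0.0)      # clustering ignored
adjusted = var_d_one_group_clustered(d, n_t, n_c, m=20, icc=0.15)

print(f"naive variance:    {naive:.4f}")
print(f"adjusted variance: {adjusted:.4f}")
print(f"the naive SE is too small by a factor of {math.sqrt(adjusted / naive):.2f}")
```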
Peer reviewed; direct link available
Phillips, Gary W. – Applied Measurement in Education, 2015
This article proposes that sampling design effects have potentially huge unrecognized impacts on the results reported by large-scale district and state assessments in the United States. When design effects are unrecognized and unaccounted for, they lead to underestimating the sampling error in item and test statistics. Underestimating the sampling…
Descriptors: State Programs, Sampling, Research Design, Error of Measurement
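The mechanism Phillips describes follows from the classic Kish design effect, DEFF = 1 + (m - 1)ρ, where m is the cluster size and ρ the intraclass correlation. A minimal sketch with hypothetical numbers:

```python
def design_effect(cluster_size, icc):
    """Kish design effect for cluster sampling: DEFF = 1 + (m - 1) * icc."""
    return 1 + (cluster_size - 1) * icc

# Hypothetical state assessment: 2,500 students sampled as 100 intact
# classrooms of 25, with a modest intraclass correlation.
n, m, icc = 2500, 25, 0.10
deff = design_effect(m, icc)

n_effective = n / deff        # SRS-equivalent information in the sample
se_inflation = deff ** 0.5    # factor by which true SEs exceed simple-random-sampling SEs

print(f"DEFF = {deff:.2f}")
print(f"Effective sample size: {n_effective:.0f} (not {n})")
print(f"SEs computed as if under simple random sampling are {se_inflation:.2f}x too small")
```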
Peer reviewed; PDF full text available on ERIC
What Works Clearinghouse, 2014
This "What Works Clearinghouse Procedures and Standards Handbook (Version 3.0)" provides a detailed description of the standards and procedures of the What Works Clearinghouse (WWC). The remaining chapters of this Handbook are organized to take the reader through the basic steps that the WWC uses to develop a review protocol, identify…
Descriptors: Educational Research, Guides, Intervention, Classification
Taylor, Dianne L. – 1991
As significance testing comes under increasing criticism, some researchers are turning to other indices to evaluate their findings. Among the alternatives are the interpretation of effect size estimates and the evaluation of sample specificity (invariance testing). Using a hypothetical data set of 64 cases and two predictor…
Descriptors: Discriminant Analysis, Effect Size, Evaluation Methods, Sample Size
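A minimal sketch of the two alternatives Taylor discusses, on a simulated data set of the same size (64 cases, two predictors); ordinary regression stands in for the paper's discriminant analysis, a simplifying substitution:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical data echoing the paper's setup: 64 cases, two predictors.
X = rng.normal(size=(64, 2))
y = 1.0 + 0.6 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=1.0, size=64)

def fit_r2(X, y):
    """OLS fit; returns coefficients and R-squared, a variance-accounted-for effect size."""
    Xd = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    return beta, 1 - resid.var() / y.var()

beta_full, r2_full = fit_r2(X, y)
print(f"Full-sample R^2 (effect size): {r2_full:.3f}")

# Invariance check by split-sample replication: estimate weights in one
# half, apply them to the other half, and see how far R^2 shrinks.
half = len(X) // 2
beta_a, _ = fit_r2(X[:half], y[:half])
yhat_b = np.column_stack([np.ones(len(X) - half), X[half:]]) @ beta_a
r2_cross = np.corrcoef(yhat_b, y[half:])[0, 1] ** 2
print(f"Cross-validated R^2 in the holdout half: {r2_cross:.3f}")
```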
Peer reviewed
Cahan, Sorel – Educational Researcher, 2000
Shows why the two-step approach proposed by D. Robinson and J. Levine (1997) is inappropriate for the evaluation of empirical results and reiterates the preferred approach of increased sample size and the computation of confidence intervals. (SLD)
Descriptors: Effect Size, Evaluation Methods, Research Methodology, Sample Size
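A small sketch of the approach Cahan reiterates, showing how the confidence interval around a fixed observed difference narrows as the sample grows (the difference, SD, and sample sizes below are hypothetical):

```python
import math
from scipy import stats

def mean_diff_ci(diff, sd, n_per_group, conf=0.95):
    """Two-sided CI for a mean difference; equal-n, equal-variance groups assumed."""
    se = sd * math.sqrt(2 / n_per_group)
    df = 2 * n_per_group - 2
    t_crit = stats.t.ppf((1 + conf) / 2, df)
    return diff - t_crit * se, diff + t_crit * se

# The same observed difference (0.25, SD = 1.0) at two sample sizes.
for n in (20, 200):
    lo, hi = mean_diff_ci(diff=0.25, sd=1.0, n_per_group=n)
    print(f"n = {n:>3} per group: 95% CI = ({lo:+.2f}, {hi:+.2f})")
```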
Peer reviewed
Levin, Joel R.; Robinson, Daniel H. – Educational Researcher, 2000
Supports a two-step approach to the estimation and discussion of effect sizes, making a distinction between single-study decision-oriented research and multiple-study synthesis. Introduces and illustrates the concept of "conclusion coherence." (Author/SLD)
Descriptors: Effect Size, Evaluation Methods, Research Methodology, Sample Size
Peer reviewed
Schneider, Anne L.; Darcy, Robert E. – Evaluation Review, 1984
The normative implications of applying significance tests in evaluation research are examined. The authors conclude that evaluators often make normative decisions based on the traditional .05 significance level in studies with small samples. Additional reporting of the magnitude of impact, the significance level, and the power of the test is…
Descriptors: Evaluation Methods, Hypothesis Testing, Research Methodology, Research Problems
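A sketch of the supplementary reporting the authors recommend (magnitude, significance level, and power together), using statsmodels' power routines; the effect size, alpha, and sample sizes are hypothetical:

```python
from statsmodels.stats.power import TTestIndPower

# Hypothetical small-sample evaluation: how likely is a real, moderate
# effect (d = 0.5) to reach p < .05 with only 20 cases per group?
analysis = TTestIndPower()
power = analysis.power(effect_size=0.5, nobs1=20, alpha=0.05)
print(f"Power to detect d = 0.5 at alpha = .05 with n = 20 per group: {power:.2f}")

# Sample size needed for conventional 80% power at the same effect size.
n_needed = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(f"n per group needed for 80% power: {n_needed:.0f}")
```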
Peer reviewed
Arvey, Richard D.; And Others – Personnel Psychology, 1985
Investigates sample size requirements needed to achieve various levels of statistical power using posttest-only, gain-score, and analysis of covariance designs in evaluating training interventions. Results indicate that the power to detect true effects differs according to type of design, correlation between pre- and posttest, and size of effect due to…
Descriptors: Correlation, Evaluation Methods, Power (Statistics), Research Design
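The dependence of power on design type and pre-post correlation can be sketched with the classic error-variance results for the three designs; these textbook variance ratios are an assumption for illustration, not necessarily the paper's exact parameterization:

```python
from statsmodels.stats.power import TTestIndPower

def effective_d(d, design, rho):
    """Rescale a raw effect size d by each design's relative error variance.

    Classic results with equal pre/post variances and pre-post correlation rho:
      posttest-only: 1    gain score: 2 * (1 - rho)    ANCOVA: 1 - rho**2
    """
    scale = {"posttest": 1.0, "gain": 2 * (1 - rho), "ancova": 1 - rho**2}[design]
    return d / scale ** 0.5

analysis = TTestIndPower()
d, rho, n = 0.4, 0.6, 50   # hypothetical effect, pre-post correlation, n per group
for design in ("posttest", "gain", "ancova"):
    power = analysis.power(effect_size=effective_d(d, design, rho), nobs1=n, alpha=0.05)
    print(f"{design:>8}: power = {power:.2f}")
```

With rho above .5 the gain-score design beats posttest-only and ANCOVA beats both; with rho below .5 the gain score actually loses power, which is the kind of design-dependence the study documents.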
Palomares, Ronald S. – 1990
Researchers increasingly recognize that significance tests are limited in their ability to inform scientific practice. Common errors in interpreting significance tests and three strategies for augmenting the interpretation of significance test results are illustrated. The first strategy for augmenting the interpretation of significance tests…
Descriptors: Effect Size, Estimation (Mathematics), Evaluation Methods, Research Design
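One widely used augmentation of this kind is recovering a variance-accounted-for effect size from a reported test statistic; the sketch below uses the standard t-to-eta-squared conversion (the reported t value is hypothetical, and this particular conversion is an illustrative choice rather than a claim about the paper's first strategy):

```python
def eta_squared_from_t(t, df):
    """Variance-accounted-for effect size recoverable from a reported t test."""
    return t**2 / (t**2 + df)

# Hypothetical reported result: t(62) = 2.10, p < .05.
t, df = 2.10, 62
print(f"eta^2 = {eta_squared_from_t(t, df):.3f}")   # about .066: significant but small
```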
Ciechalski, Joseph C.; Pinkney, James W.; Weaver, Florence S. – 2002
This paper illustrates the use of the McNemar test with a hypothetical problem. The McNemar test is a nonparametric chi-square-type test that uses dependent rather than independent samples to assess before-after designs in which each subject serves as his or her own control. Results of the McNemar test make it…
Descriptors: Attitude Change, Chi Square, Evaluation Methods, Nonparametric Statistics
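A runnable version of this kind of before-after example, with an invented attitude-change table (only the discordant cells enter the statistic):

```python
from statsmodels.stats.contingency_tables import mcnemar

# Rows: before (favor / oppose); columns: after (favor / oppose).
table = [[30, 5],     # favor -> favor, favor -> oppose  (b = 5)
         [15, 20]]    # oppose -> favor, oppose -> oppose (c = 15)

result = mcnemar(table, exact=False, correction=True)
print(f"chi-square = {result.statistic:.2f}, p = {result.pvalue:.4f}")

# Equivalent hand computation with Yates' continuity correction:
b, c = table[0][1], table[1][0]
print(f"by hand: chi-square = {(abs(b - c) - 1) ** 2 / (b + c):.2f}")
```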
Thompson, Bruce – 1992
Three criticisms of overreliance on results from statistical significance tests are noted. It is suggested that: (1) statistical significance tests are often tautological; (2) some uses can involve comparisons that are not completely sensible; and (3) using statistical significance tests to evaluate both methodological assumptions (e.g., the…
Descriptors: Effect Size, Estimation (Mathematics), Evaluation Methods, Regression (Statistics)
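Criticism (1) can be shown numerically: holding the effect size fixed, the significance test largely restates the sample size (the correlation and sample sizes below are illustrative):

```python
from scipy import stats

# The same correlation (r = .10) tested at increasing sample sizes.
r = 0.10
for n in (50, 400, 4000):
    t = r * ((n - 2) / (1 - r**2)) ** 0.5
    p = 2 * stats.t.sf(abs(t), df=n - 2)
    print(f"n = {n:>4}: t = {t:5.2f}, p = {p:.4f}")
```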
Shapiro, Jonathan – 1979
A statistical definition of information utilization for policy-making decisions and an evaluation impact test to determine its occurrence are proposed. A univariate time series analysis is used to identify the internal trend for a given policy output variable and to control its effect. Two problems are identified in implementing an evaluation…
Descriptors: Decision Making, Evaluation Methods, Goodness of Fit, Information Utilization
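A minimal sketch of the proposed logic, assuming a simple linear internal trend and a level-shift impact; the series, trend, and shift are simulated, and the paper's actual time-series machinery is more elaborate than a straight-line fit:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical monthly policy-output series: 36 pre-policy months with a
# linear internal trend, then 12 post-policy months with a +3 level shift.
t_pre, t_post = np.arange(36), np.arange(36, 48)
y_pre = 10 + 0.20 * t_pre + rng.normal(0, 1, 36)
y_post = 10 + 0.20 * t_post + 3 + rng.normal(0, 1, 12)

# Step 1: estimate the internal trend from the pre-policy series alone.
slope, intercept = np.polyfit(t_pre, y_pre, 1)

# Step 2: project the trend forward and compare with observed outcomes;
# a sustained departure is evidence of impact beyond the internal trend.
forecast = intercept + slope * t_post
impact = (y_post - forecast).mean()
print(f"Estimated internal trend: {slope:.2f} per month")
print(f"Mean post-policy departure from trend: {impact:.2f}")
```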