Showing 1 to 15 of 21 results
Peer reviewed
PDF on ERIC Download full text
Ponce-Renova, Hector F. – Journal of New Approaches in Educational Research, 2022
This paper's objective was to explain Equivalence Testing as applied to educational research, to emphasize recommendations, and to increase the quality of research. Equivalence Testing is a technique used to compare the effect sizes or means of two different studies to ascertain whether they are statistically equivalent. For making accessible Equivalence…
Descriptors: Educational Research, Effect Size, Statistical Analysis, Intervals
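As a rough illustration of the two one-sided tests (TOST) logic that equivalence testing builds on, the sketch below compares two simulated sets of effect estimates against an assumed equivalence margin; the margin, the toy data, and the Welch t-test formulation are illustrative choices, not values from the paper.

```python
# Minimal TOST sketch for equivalence of two group means (illustrative only).
import numpy as np
from scipy import stats

def tost_two_means(x, y, delta, alpha=0.05):
    """Declare the means of x and y equivalent if the mean difference lies
    within (-delta, +delta) according to two one-sided Welch t-tests."""
    nx, ny = len(x), len(y)
    vx, vy = np.var(x, ddof=1), np.var(y, ddof=1)
    diff = np.mean(x) - np.mean(y)
    se = np.sqrt(vx / nx + vy / ny)
    # Welch-Satterthwaite degrees of freedom
    df = se**4 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    t_lower = (diff + delta) / se            # H0: diff <= -delta
    t_upper = (diff - delta) / se            # H0: diff >= +delta
    p_lower = 1 - stats.t.cdf(t_lower, df)
    p_upper = stats.t.cdf(t_upper, df)
    equivalent = (p_lower < alpha) and (p_upper < alpha)
    return diff, p_lower, p_upper, equivalent

rng = np.random.default_rng(0)
x = rng.normal(0.30, 1.0, 120)   # simulated effect estimates, "study 1"
y = rng.normal(0.25, 1.0, 110)   # simulated effect estimates, "study 2"
print(tost_two_means(x, y, delta=0.2))   # assumed equivalence margin of 0.2
```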
Peer reviewed
PDF on ERIC Download full text
What Works Clearinghouse, 2020
The What Works Clearinghouse (WWC) is an initiative of the U.S. Department of Education's Institute of Education Sciences (IES), which was established under the Education Sciences Reform Act of 2002. It is an important part of IES's strategy to use rigorous and relevant research, evaluation, and statistics to improve the nation's education system.…
Descriptors: Educational Research, Evaluation Methods, Evidence, Statistical Significance
Peer reviewed
PDF on ERIC Download full text
What Works Clearinghouse, 2017
The What Works Clearinghouse (WWC) systematic review process is the basis of many of its products, enabling the WWC to use consistent, objective, and transparent standards and procedures in its reviews, while also ensuring comprehensive coverage of the relevant literature. The WWC systematic review process consists of five steps: (1) Developing…
Descriptors: Educational Research, Evaluation Methods, Evidence, Statistical Significance
Peer reviewed
PDF on ERIC Download full text
Dong, Nianbo; Lipsey, Mark – Society for Research on Educational Effectiveness, 2014
When randomized controlled trials (RCTs) are not feasible, researchers seek other methods for making causal inferences, e.g., propensity score methods. One of the underlying assumptions required for propensity score methods to obtain unbiased treatment effect estimates is the ignorability assumption, that is, conditional on the propensity score, treatment…
Descriptors: Educational Research, Benchmarking, Statistical Analysis, Computation
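A minimal sketch of the general propensity-score idea referenced above, on simulated data: estimate the probability of treatment from observed covariates with logistic regression and form inverse-probability weights. The data-generating process, the use of scikit-learn, and the IPW estimator are illustrative assumptions; ignorability itself cannot be verified from the data.

```python
# Illustrative propensity-score / inverse-probability-weighting sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=(n, 3))                            # observed covariates
p_treat = 1 / (1 + np.exp(-(x[:, 0] - 0.5 * x[:, 1])))
z = rng.binomial(1, p_treat)                           # treatment indicator
y = 0.4 * z + x @ np.array([0.5, -0.3, 0.2]) + rng.normal(size=n)

ps = LogisticRegression(max_iter=1000).fit(x, z).predict_proba(x)[:, 1]
w = np.where(z == 1, 1 / ps, 1 / (1 - ps))             # inverse-probability weights
ate = (np.average(y[z == 1], weights=w[z == 1])
       - np.average(y[z == 0], weights=w[z == 0]))
print(f"IPW estimate of the average treatment effect: {ate:.3f} (true effect 0.4)")
```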
Peer reviewed
Direct link
Castellano, Katherine E.; Rabe-Hesketh, Sophia; Skrondal, Anders – Journal of Educational and Behavioral Statistics, 2014
Investigations of the effects of schools (or teachers) on student achievement focus on either (1) individual school effects, such as value-added analyses, or (2) school-type effects, such as comparisons of charter and public schools. Controlling for school composition by including student covariates is critical for valid estimation of either kind…
Descriptors: Hierarchical Linear Modeling, Context Effect, Economics, Educational Research
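One common way to control for school composition with student covariates is a mixed model with a school random intercept; the sketch below fits such a model to simulated data with statsmodels. The data, the single covariate, and the model specification are illustrative assumptions, not the estimators compared in the article.

```python
# Random-intercept sketch: school effects on achievement with a student covariate.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_schools, n_per = 40, 30
school = np.repeat(np.arange(n_schools), n_per)
school_effect = rng.normal(0, 0.5, n_schools)[school]     # latent school effects
ses = rng.normal(size=n_schools * n_per)                   # student-level covariate
score = 50 + 3 * ses + school_effect + rng.normal(0, 2, n_schools * n_per)

df = pd.DataFrame({"score": score, "ses": ses, "school": school})
result = smf.mixedlm("score ~ ses", df, groups=df["school"]).fit()
print(result.summary())
```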
Saupe, Joe L.; Eimers, Mardy T. – Association for Institutional Research, 2013
The purpose of this paper is to explore differences in the reliabilities of cumulative college grade point averages (GPAs), estimated for unweighted and weighted, one-semester, 1-year, 2-year, and 4-year GPAs. Using cumulative GPAs for a freshman class at a major university, we estimate internal consistency (coefficient alpha) reliabilities for…
Descriptors: Grade Point Average, College Freshmen, Reliability, Comparative Analysis
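Coefficient alpha for a cumulative GPA can be computed by treating semester GPAs as the "items" of a composite. The sketch below does this on simulated records; the simulated grade distribution and the eight-semester structure are assumptions, not the freshman-class data analyzed in the paper.

```python
# Coefficient alpha (Cronbach's alpha) for a GPA composite of semester GPAs.
import numpy as np

def cronbach_alpha(items):
    """items: array of shape (n_students, n_semesters)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(3)
ability = rng.normal(3.0, 0.4, 500)                       # latent GPA level per student
semester_gpas = np.clip(ability[:, None] + rng.normal(0, 0.3, (500, 8)), 0, 4)
print(f"alpha over 8 semester GPAs: {cronbach_alpha(semester_gpas):.3f}")
```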
Peer reviewed
PDF on ERIC Download full text
Gleason, Philip M.; Resch, Alexandra M.; Berk, Jillian A. – National Center for Education Evaluation and Regional Assistance, 2012
This NCEE Technical Methods Paper compares the estimated impacts of an educational intervention using experimental and regression discontinuity (RD) study designs. The analysis used data from two large-scale randomized controlled trials--the Education Technology Evaluation and the Teach for America Study--to provide evidence on the performance of…
Descriptors: Regression (Statistics), Research Design, Comparative Analysis, Intervention
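For readers unfamiliar with regression discontinuity estimation, the toy sketch below recovers a treatment effect as the jump in the outcome at an assignment cutoff using a local linear fit. The simulated data, cutoff, and bandwidth are illustrative assumptions and do not reproduce the paper's within-study comparison of RD and experimental estimates.

```python
# Toy sharp regression-discontinuity estimate via local linear regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 3000
running = rng.uniform(-1, 1, n)                 # assignment score, cutoff at 0
treated = (running >= 0).astype(float)
y = 1.0 * treated + 2.0 * running + rng.normal(0, 1, n)

bw = 0.5                                        # assumed bandwidth around the cutoff
keep = np.abs(running) <= bw
X = sm.add_constant(np.column_stack([treated[keep], running[keep],
                                     treated[keep] * running[keep]]))
fit = sm.OLS(y[keep], X).fit()
print(f"RD impact estimate at the cutoff: {fit.params[1]:.3f} (true 1.0)")
```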
Cheema, Jehanzeb R. – Review of Educational Research, 2014
Missing data are a common occurrence in survey-based research studies in education, and the way missing values are handled can significantly affect the results of analyses based on such data. Despite known problems with performance of some missing data handling methods, such as mean imputation, many researchers in education continue to use those…
Descriptors: Educational Research, Data, Data Collection, Data Processing
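A small simulated demonstration of the kind of distortion the review warns about: mean imputation shrinks the variance of the imputed variable and attenuates its correlation with other variables. The data-generating process and missingness rate below are assumptions chosen only for illustration.

```python
# Why mean imputation can distort results: variance shrinkage and attenuated correlation.
import numpy as np

rng = np.random.default_rng(5)
n = 1000
x = rng.normal(size=n)
y = 0.6 * x + rng.normal(0, 0.8, n)
x_obs = x.copy()
x_obs[rng.random(n) < 0.3] = np.nan            # 30% missing completely at random

complete = ~np.isnan(x_obs)
x_mean_imputed = np.where(complete, x_obs, np.nanmean(x_obs))

print("true corr(x, y):              ", round(np.corrcoef(x, y)[0, 1], 3))
print("complete-case corr:           ", round(np.corrcoef(x_obs[complete], y[complete])[0, 1], 3))
print("mean-imputed corr:            ", round(np.corrcoef(x_mean_imputed, y)[0, 1], 3))
print("variance after mean imputation:", round(x_mean_imputed.var(ddof=1), 3))
```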
Peer reviewed
Direct link
Rhoads, Christopher H. – Journal of Educational and Behavioral Statistics, 2011
Experimental designs that randomly assign entire clusters of individuals (e.g., schools and classrooms) to treatments are frequently advocated as a way of guarding against contamination of the estimated average causal effect of treatment. However, in the absence of contamination, experimental designs that randomly assign intact clusters to…
Descriptors: Educational Research, Research Design, Effect Size, Experimental Groups
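The precision cost of assigning intact clusters is often summarized by the design effect 1 + (m - 1) * ICC, where m is the cluster size and ICC is the intraclass correlation. The short sketch below tabulates this for a few assumed values; the specific cluster size and ICCs are illustrative, not figures from the article.

```python
# Design effect of cluster random assignment relative to individual assignment.
def design_effect(cluster_size, icc):
    return 1 + (cluster_size - 1) * icc

for icc in (0.05, 0.10, 0.20):
    deff = design_effect(cluster_size=25, icc=icc)
    print(f"ICC={icc:.2f}: design effect {deff:.2f} "
          f"-> effective sample size shrinks by a factor of {deff:.2f}")
```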
Peer reviewed
PDF on ERIC Download full text
What Works Clearinghouse, 2014
The 2011 study, "Benefits of Practicing 4 = 2 + 2: Nontraditional Problem Formats Facilitate Children's Understanding of Mathematical Equivalence," examined the effects of addition practice using nontraditional problem formats on students' understanding of mathematical equivalence. In nontraditional problem formats, operations appear on…
Descriptors: Mathematics Instruction, Elementary School Students, Addition, Teaching Methods
Peer reviewed
PDF on ERIC Download full text
Fortson, Kenneth; Verbitsky-Savitz, Natalya; Kopa, Emma; Gleason, Philip – National Center for Education Evaluation and Regional Assistance, 2012
Randomized controlled trials (RCTs) are widely considered to be the gold standard in evaluating the impacts of a social program. When an RCT is infeasible, researchers often estimate program impacts by comparing outcomes of program participants with those of a nonexperimental comparison group, adjusting for observable differences between the two…
Descriptors: Charter Schools, Middle School Students, Educational Research, Research Design
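As a simplified illustration of the adjustment problem the study examines, the sketch below contrasts a naive program-versus-comparison difference with an OLS estimate that adjusts for an observed baseline covariate in simulated data. Selection here depends only on an observed pretest, which is an assumption; adjustment would not remove bias from unobserved differences.

```python
# Naive comparison-group estimate vs. regression adjustment for an observed pretest.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 4000
baseline = rng.normal(size=n)                              # observed pretest
in_program = rng.binomial(1, 1 / (1 + np.exp(-baseline)))  # selection on the pretest
outcome = 0.3 * in_program + 0.8 * baseline + rng.normal(0, 1, n)

naive = outcome[in_program == 1].mean() - outcome[in_program == 0].mean()
X = sm.add_constant(np.column_stack([in_program, baseline]))
adjusted = sm.OLS(outcome, X).fit().params[1]
print(f"naive difference: {naive:.3f}; regression-adjusted: {adjusted:.3f} (true 0.3)")
```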
Peer reviewed
Direct link
Xu, Ting; Stone, Clement A. – Educational and Psychological Measurement, 2012
It has been argued that item response theory trait estimates should be used in analyses rather than number right (NR) or summated scale (SS) scores. Thissen and Orlando postulated that IRT scaling tends to produce trait estimates that are linearly related to the underlying trait being measured. Therefore, IRT trait estimates can be more useful…
Descriptors: Educational Research, Monte Carlo Methods, Measures (Individuals), Item Response Theory
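The nonlinearity at issue can be seen from the test characteristic curve of a simple Rasch model: expected number-right is a logistic function of the latent trait, so equal number-right differences do not correspond to equal trait differences. The item difficulties below are hypothetical, not the parameters used in the Monte Carlo study.

```python
# Test characteristic curve of a Rasch model: theta maps nonlinearly to number-right.
import numpy as np

difficulties = np.linspace(-2, 2, 20)          # assumed item difficulties

def expected_number_right(theta):
    p = 1 / (1 + np.exp(-(theta - difficulties)))
    return p.sum()

for theta in (-2.0, -1.0, 0.0, 1.0, 2.0):
    print(f"theta={theta:+.1f}: expected number-right = "
          f"{expected_number_right(theta):5.2f} of {len(difficulties)}")
```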
Peer reviewed
PDF on ERIC Download full text
What Works Clearinghouse, 2014
This "What Works Clearinghouse Procedures and Standards Handbook (Version 3.0)" provides a detailed description of the standards and procedures of the What Works Clearinghouse (WWC). The remaining chapters of this Handbook are organized to take the reader through the basic steps that the WWC uses to develop a review protocol, identify…
Descriptors: Educational Research, Guides, Intervention, Classification
Peer reviewed
Christensen, Carol A.; Cooper, Tom J. – British Educational Research Journal, 1992
Presents results from an Australian study examining whether children who use cognitive strategies in solving simple addition questions develop greater proficiency in addition than children who do not use such strategies. Describes the subjects, instruments, procedure, and instructional treatment. Concludes that the development of cognitive…
Descriptors: Addition, Cognitive Development, Cognitive Processes, Comparative Analysis
Peer reviewed
Direct link
Stuart, Elizabeth A. – Educational Researcher, 2007
Education researchers, practitioners, and policymakers alike are committed to identifying interventions that teach students more effectively. Increased emphasis on evaluation and accountability has increased desire for sound evaluations of these interventions; and at the same time, school-level data have become increasingly available. This article…
Descriptors: Research Methodology, Computation, Causal Models, Intervention