Publication Date
| Date range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 0 |
| Since 2007 (last 20 years) | 9 |
Author
| Author | Count |
| --- | --- |
| Baxter, G.P. | 1 |
| Bell, Stephen H. | 1 |
| Bleeker, M.M. | 1 |
| Carini, Robert M. | 1 |
| Cruce, Ty | 1 |
| Dorans, Neil | 1 |
| Dorman, Jeffrey Paul | 1 |
| Gonyea, Robert M. | 1 |
| Gorman, Dennis M. | 1 |
| Hayek, John C. | 1 |
| Hojat, Mohammadreza | 1 |
Publication Type
| Publication type | Count |
| --- | --- |
| Reports - Evaluative | 15 |
| Journal Articles | 8 |
| Speeches/Meeting Papers | 4 |
| Numerical/Quantitative Data | 1 |
Education Level
| Education level | Count |
| --- | --- |
| Higher Education | 3 |
| Postsecondary Education | 3 |
| Elementary Secondary Education | 2 |
| Adult Education | 1 |
| Grade 4 | 1 |
| Grade 8 | 1 |
| Secondary Education | 1 |
Location
| Location | Count |
| --- | --- |
| Greece | 1 |
| Kentucky | 1 |
| Puerto Rico | 1 |
Assessments and Surveys
| Assessment | Count |
| --- | --- |
| SAT (College Admission Test) | 1 |
Tsamadias, Constantinos; Prontzas, Panagiotis – Education Economics, 2012
This paper examines the impact of education on economic growth in Greece over the period 1960-2000 by applying the model introduced by Mankiw, Romer, and Weil. The empirical analysis reveals that education had a positive and statistically significant effect on economic growth over this period. The econometric…
Descriptors: Job Training, Foreign Countries, Human Capital, Economic Progress
Kim, Eun Sook; Willson, Victor L. – Educational and Psychological Measurement, 2010
This article presents a method to evaluate pretest effects on posttest scores in the absence of an un-pretested control group using published results of pretesting effects due to Willson and Putnam. Confidence intervals around the expected theoretical gain due to pretesting are computed, and observed gains or differential gains are compared with…
Descriptors: Control Groups, Intervals, Educational Research, Educational Psychology
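The abstract's core move can be illustrated with a short sketch: given a published estimate of the gain attributable to pretesting alone (the values below are hypothetical, not from Kim and Willson), build a normal-theory confidence interval around it and ask whether an observed gain exceeds it.

```python
# Hypothetical values: a published pretesting effect (in SD units) and its
# standard error, plus the gain observed in a study with no un-pretested control.
expected_gain = 0.20   # expected gain attributable to pretesting alone
se_gain = 0.06         # standard error of that estimate
observed_gain = 0.45   # pre-post gain observed in the evaluated program

# 95% normal-theory confidence interval around the expected pretesting gain.
z = 1.96
lower, upper = expected_gain - z * se_gain, expected_gain + z * se_gain
print(f"95% CI for gain due to pretesting alone: [{lower:.3f}, {upper:.3f}]")

# If the observed gain falls above the interval, it exceeds what pretesting
# alone would be expected to produce.
if observed_gain > upper:
    print("Observed gain exceeds the pretesting-only expectation.")
else:
    print("Observed gain is consistent with a pretesting artifact.")
```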
Moses, Tim; Miao, Jing; Dorans, Neil – Educational Testing Service, 2010
This study compared the accuracies of four differential item functioning (DIF) estimation methods, where each method makes use of only one of the following: raw data, logistic regression, loglinear models, or kernel smoothing. The major focus was on the estimation strategies' potential for estimating score-level, conditional DIF. A secondary focus…
Descriptors: Test Bias, Statistical Analysis, Computation, Scores
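As a hedged sketch of one of the four approaches named above, the logistic-regression method tests a group term after conditioning on the matching score; the simulated data and coefficients below are illustrative, not taken from the study.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
score = rng.normal(0, 1, n)       # matching variable (total test score)
group = rng.integers(0, 2, n)     # 0 = reference group, 1 = focal group

# Simulate an item with uniform DIF: the focal group finds it 0.5 logits harder.
logit = 0.8 * score - 0.5 * group
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Logistic-regression DIF: test the group coefficient after conditioning
# on the matching score.
X = sm.add_constant(np.column_stack([score, group]))
fit = sm.Logit(y, X).fit(disp=0)
print(fit.params)       # [const, score, group]
print(fit.pvalues[2])   # significance of the group (DIF) term
```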
Gorman, Dennis M.; Huber, J. Charles, Jr. – Evaluation Review, 2009
This study explores the possibility that any drug prevention program might be considered "evidence-based" if data analysis procedures are chosen to maximize the chance of producing statistically significant results. It does so by reanalyzing data from a Drug Abuse Resistance Education (DARE) program evaluation. The analysis produced a number of…
Descriptors: Program Evaluation, Drug Education, Prevention, Drug Abuse
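A minimal simulation of the mechanism the abstract describes: with many subgroup-by-outcome tests of a truly null program effect, some "significant" findings emerge by chance. The test counts below are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulate a null program effect tested across many subgroup x outcome
# combinations (e.g., 5 subgroups x 8 outcomes = 40 independent tests).
n_tests, n_per_arm = 40, 100
false_positives = 0
for _ in range(n_tests):
    treat = rng.normal(0, 1, n_per_arm)    # no true effect
    control = rng.normal(0, 1, n_per_arm)
    _, p = stats.ttest_ind(treat, control)
    false_positives += p < 0.05

print(f"{false_positives} of {n_tests} null tests were 'significant' at .05")
# Chance of at least one false positive across 40 independent tests:
print(f"P(at least one) = {1 - 0.95**n_tests:.2f}")
```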
Sexton, Thomas R. – Journal of Case Studies in Accreditation and Assessment, 2010
In the current economic climate, business schools face crucial decisions. As resources become scarcer, schools must either streamline operations or limit them. An efficiency analysis of U.S. business schools is presented that computes, for each business school, an overall efficiency score and provides separate factor efficiency scores, indicating…
Descriptors: Efficiency, Business Administration Education, Scores, Factor Analysis
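Efficiency scores of this kind are commonly computed with data envelopment analysis (DEA). The sketch below solves an input-oriented CCR model by linear programming on toy data; it is an assumption-laden stand-in, not Sexton's actual specification.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: 2 inputs (faculty FTE, budget $M) and 1 output (degrees awarded)
# for 4 hypothetical business schools; columns are schools.
X = np.array([[20.0, 35.0, 30.0, 50.0],
              [5.0, 8.0, 6.0, 12.0]])
Y = np.array([[200.0, 300.0, 320.0, 400.0]])

def ccr_input_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0: minimize theta subject to
    X @ lam <= theta * X[:, j0], Y @ lam >= Y[:, j0], lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                   # decision vars: [theta, lam]
    A_in = np.hstack([-X[:, [j0]], X])            # X lam - theta * x0 <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])     # -Y lam <= -y0
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

for j in range(X.shape[1]):
    print(f"School {j}: efficiency = {ccr_input_efficiency(X, Y, j):.3f}")
```

Efficient schools score 1.0; a score below 1.0 is the fraction of its inputs an inefficient school would need to produce its current outputs.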
Dorman, Jeffrey Paul – Educational Psychology, 2008
This paper discusses the effect of clustering on statistical tests and illustrates this effect using classroom environment data. Most classroom environment studies collect data from students nested within classrooms, and the hierarchical nature of these data cannot be ignored. In particular, this paper studies the influence of…
Descriptors: Statistical Significance, Data Analysis, Classroom Environment, Error of Measurement
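The standard way to quantify this clustering effect is the design effect, DEFF = 1 + (m - 1) × ICC, where m is the average cluster size and ICC the intraclass correlation. The values below are hypothetical but show how sharply clustering shrinks the effective sample size.

```python
import math

m, icc, n_students = 25, 0.15, 1000   # hypothetical classroom data

deff = 1 + (m - 1) * icc
n_effective = n_students / deff
print(f"Design effect: {deff:.2f}")                 # 4.60
print(f"Effective sample size: {n_effective:.0f}")  # ~217 of 1000 students

# Standard errors computed as if students were independent are too small
# by a factor of sqrt(DEFF), inflating Type I error rates.
print(f"SEs understated by a factor of {math.sqrt(deff):.2f}")
```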
Puma, Michael J.; Olsen, Robert B.; Bell, Stephen H.; Price, Cristofer – National Center for Education Evaluation and Regional Assistance, 2009
This NCEE Technical Methods report examines how to address the problem of missing data in the analysis of Randomized Controlled Trials (RCTs) of educational interventions, with a particular focus on the common situation in which groups of students, such as entire classrooms or schools, are randomized. Missing outcome data are a…
Descriptors: Educational Research, Research Design, Research Methodology, Control Groups
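A small simulation, not drawn from the report, shows why missing outcome data threaten RCTs: when low-scoring control students are likelier to be missing, a complete-case difference in means understates the true impact.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
treat = rng.integers(0, 2, n)
outcome = 0.3 * treat + rng.normal(0, 1, n)   # true impact = 0.30

# Differential attrition: low-scoring control students are likelier to have
# missing outcomes, while treatment-group missingness is flat at 10%.
p_miss = np.where(treat == 0, 1 / (1 + np.exp(1.5 + 2 * outcome)), 0.10)
observed = rng.random(n) >= p_miss

cc_estimate = (outcome[observed & (treat == 1)].mean()
               - outcome[observed & (treat == 0)].mean())
print(f"Complete-case impact estimate: {cc_estimate:.3f} (true = 0.300)")
# The estimate is biased downward because the observed control group
# over-represents high scorers.
```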
Schochet, Peter Z. – National Center for Education Evaluation and Regional Assistance, 2008
This report presents guidelines for addressing the multiple comparisons problem in impact evaluations in the education area. The problem occurs due to the large number of hypothesis tests that are typically conducted across outcomes and subgroups in these studies, which can lead to spurious statistically significant impact findings. The…
Descriptors: Guidelines, Testing, Hypothesis Testing, Statistical Significance
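One standard correction in this literature is the Benjamini-Hochberg false discovery rate procedure; the implementation and p-values below are illustrative, not taken from the report.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of rejected hypotheses under the
    Benjamini-Hochberg false discovery rate procedure."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    ranks = np.arange(1, len(p) + 1)
    below = p[order] <= q * ranks / len(p)
    reject = np.zeros(len(p), dtype=bool)
    if below.any():
        cutoff = np.max(np.where(below)[0])   # largest rank meeting the bound
        reject[order[:cutoff + 1]] = True
    return reject

# Hypothetical p-values from tests across several outcomes and subgroups.
pvals = [0.001, 0.008, 0.020, 0.041, 0.049, 0.230, 0.610]
print(benjamini_hochberg(pvals))
# Unadjusted testing at .05 would flag five results; BH flags only three.
```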
Weigle, David C. – 1994
The purposes of the present paper are to address the historical development of statistical significance testing and to briefly examine contemporary practices regarding such testing in the light of these historical origins. Precursors leading to the advent of statistical significance testing are examined as are more recent controversies surrounding…
Descriptors: Data Analysis, Educational History, Etiology, Research Methodology
Baxter, G.P.; Bleeker, M.M.; Waits, T.L.; Salvucci, S. – National Center for Education Statistics, 2007
This report presents highlights of the results for fourth- and eighth-grade students in Puerto Rico for the 2003 and 2005 National Assessment of Educational Progress (NAEP) in mathematics. The NAEP mathematics assessment was administered to public school students in Puerto Rico for the first time in 2003. Although NAEP had previously administered…
Descriptors: Statistical Significance, Public Schools, National Competency Tests, Mathematics Achievement
Hojat, Mohammadreza; Xu, Gang – Advances in Health Sciences Education, 2004
The effect size (ES) is an increasingly important index used to quantify the degree of practical significance of study results. This paper gives an introduction to the computation and interpretation of effect sizes from the perspective of the consumer of the research literature. The key points made are: (1) "ES" is a useful indicator of the…
Descriptors: Definitions, Statistical Significance, Effect Size, Correlation
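For a concrete instance of the computation the paper introduces, Cohen's d divides a mean difference by a pooled standard deviation; the numbers below are hypothetical.

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Standardized mean difference with a pooled standard deviation."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

# A difference that is statistically significant with large n can still be
# practically small.
d = cohens_d(mean1=75.0, mean2=73.0, sd1=10.0, sd2=10.0, n1=500, n2=500)
print(f"Cohen's d = {d:.2f}")   # 0.20: a 'small' effect by Cohen's benchmarks
```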
Kulik, James A.; Kulik, Chen-Lin C. – 1990
The assumptions and consequences of applying conventional and newer statistical methods to meta-analytic data sets are reviewed. The application of the two approaches to a meta-analytic data set described by L. V. Hedges (1984) illustrates the differences. Hedges analyzed six studies of the effects of open education on student cooperation. The…
Descriptors: Analysis of Variance, Chi Square, Comparative Analysis, Data Analysis
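A common meta-analytic computation in this vein is fixed-effect inverse-variance pooling with the Q homogeneity test; the sketch below uses made-up effect sizes that echo the structure, not the values, of the Hedges open-education example.

```python
import numpy as np

# Hypothetical effect sizes (g) and sampling variances from six studies.
g = np.array([0.10, 0.25, -0.05, 0.40, 0.15, 0.30])
v = np.array([0.04, 0.03, 0.05, 0.06, 0.02, 0.04])

# Fixed-effect inverse-variance pooled estimate and its standard error.
w = 1 / v
g_bar = np.sum(w * g) / np.sum(w)
se = np.sqrt(1 / np.sum(w))
print(f"Pooled g = {g_bar:.3f} (SE = {se:.3f})")

# Homogeneity test: Q follows chi-square with k - 1 df under a common effect.
Q = np.sum(w * (g - g_bar) ** 2)
print(f"Q = {Q:.2f} on {len(g) - 1} df")
```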
Osgood, D. Wayne; Smith, Gail L. – Evaluation Review, 1995
Strategies are presented for analyzing longitudinal research designs with many waves of data using hierarchical linear modeling. The approach defines well-focused parameters that yield meaningful effect size estimates and significance tests. It is illustrated with data from the Boys Town Follow-Up Study. (SLD)
Descriptors: Data Analysis, Effect Size, Estimation (Mathematics), Evaluation Methods
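A growth model of the kind described can be fit with statsmodels' MixedLM, with random intercepts and slopes per person across many waves. The simulated data below stand in for the Boys Town data, which are not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_people, n_waves = 200, 6

# Simulate many-wave longitudinal data: person-specific intercepts and
# slopes varying around an average trajectory.
pid = np.repeat(np.arange(n_people), n_waves)
wave = np.tile(np.arange(n_waves), n_people)
u0 = rng.normal(0, 1.0, n_people)[pid]     # random intercepts
u1 = rng.normal(0, 0.2, n_people)[pid]     # random slopes
y = 2.0 + 0.5 * wave + u0 + u1 * wave + rng.normal(0, 0.5, len(pid))
df = pd.DataFrame({"y": y, "wave": wave, "person": pid})

# Growth model: fixed intercept and slope, random intercept and slope per
# person (a two-level hierarchical linear model).
model = smf.mixedlm("y ~ wave", df, groups=df["person"], re_formula="~wave")
fit = model.fit()
print(fit.summary())
```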
Pascarella, Ernest T.; Cruce, Ty; Umbach, Paul D.; Wolniak, Gregory C.; Kuh, George D.; Carini, Robert M.; Hayek, John C.; Gonyea, Robert M.; Zhao, Chun-Mei – Journal of Higher Education, 2006
Academic selectivity plays a dominant role in the public's understanding of what constitutes institutional excellence or quality in undergraduate education. In this study, we analyzed two independent data sets to estimate the net effect of three measures of college selectivity on dimensions of documented good practices in undergraduate education.…
Descriptors: College Instruction, Selective Admission, Undergraduate Study, Educational Quality
Thompson, Bruce – 1989
Although methodological integrity is not the sole determinant of the value of a program evaluation, decision makers have a right, at a minimum, to expect competent work from evaluators. This paper explores five areas where evaluators might improve methodological practices. First, evaluation reports should reflect the limited…
Descriptors: Analysis of Covariance, Analysis of Variance, Data Analysis, Decision Making
