Showing all 10 results
Peer reviewed
Zachary K. Collier; Minji Kong; Olushola Soyoye; Kamal Chawla; Ann M. Aviles; Yasser Payne – Journal of Educational and Behavioral Statistics, 2024
Asymmetric Likert-type items in research studies can present several challenges in data analysis, particularly concerning missing data. These items are often characterized by a skewed scaling, where either there is no neutral response option or an unequal number of possible positive and negative responses. The use of conventional techniques, such…
Descriptors: Likert Scales, Test Items, Item Analysis, Evaluation Methods
Hughes, Katherine L.; Miller, Trey; Reese, Kelly – Grantee Submission, 2021
This report from the Career and Technical Education (CTE) Research Network Lead team provides final results from an evaluability assessment of CTE programs that feasibly could be evaluated using a rigorous experimental design. Evaluability assessments (also called feasibility studies) are used in education and other fields, such as international…
Descriptors: Program Evaluation, Vocational Education, Evaluation Methods, Educational Research
Peer reviewed
Wing, Coady; Bello-Gomez, Ricardo A. – American Journal of Evaluation, 2018
Treatment effect estimates from a "regression discontinuity design" (RDD) have high internal validity. However, the arguments that support the design apply to a subpopulation that is narrower and usually different from the population of substantive interest in evaluation research. The disconnect between RDD population and the…
Descriptors: Regression (Statistics), Research Design, Validity, Evaluation Methods
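The point of this abstract can be illustrated with a minimal sharp-RDD sketch (not from the article; all data and names are simulated for illustration): the estimated treatment effect applies only at the cutoff, i.e., to the narrow subpopulation of units near it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated sharp RDD: treatment is assigned when the running variable
# (e.g., a pretest score) falls below a cutoff of 0.
n = 2000
running = rng.uniform(-1, 1, n)
treated = (running < 0).astype(float)
true_effect = 0.5
outcome = 1.0 + 0.8 * running + true_effect * treated + rng.normal(0, 0.2, n)

# Local linear regression within a bandwidth around the cutoff,
# allowing a separate slope on each side of the cutoff.
bandwidth = 0.5
mask = np.abs(running) < bandwidth
X = np.column_stack([
    np.ones(mask.sum()),
    treated[mask],
    running[mask],
    running[mask] * treated[mask],
])
coef, *_ = np.linalg.lstsq(X, outcome[mask], rcond=None)
print(f"estimated effect at the cutoff: {coef[1]:.2f}")
```

The coefficient on `treated` recovers the jump in the regression function at the cutoff; units far from the cutoff contribute nothing, which is exactly the internal-validity/external-validity trade-off the abstract describes.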
Peer reviewed
Rhodes, William – Evaluation Review, 2012
Research synthesis of evaluation findings is a multistep process. An investigator identifies a research question, acquires the relevant literature, codes findings from that literature, and analyzes the coded data to estimate the average treatment effect and its distribution in a population of interest. The process of estimating the average…
Descriptors: Social Sciences, Regression (Statistics), Meta Analysis, Models
Peer reviewed
Asher, William; Hynes, Kevin – Journal of Experimental Education, 1982
An evaluation of open education was shown to produce misleading results due to probable regression phenomena. These questionable results are now spread throughout the literature of education, sociology, and psychology. Researchers are advised to review, not merely summarize, prior articles. (Author/PN)
Descriptors: Data Analysis, Evaluation Methods, Open Education, Regression (Statistics)
Peer reviewed
Gillespie, David F.; Streeter, Calvin L. – Social Work Research, 1994
Discusses problems in analyzing change in nonexperimental data. Tests three ordinary least-squares regression models to illustrate similarities/differences. Notes that model based on raw difference change scores applies best to studying change processes; model based on outcome scores applies best to assessing consequences of change; and model…
Descriptors: Change, Data Analysis, Evaluation Methods, Least Squares Statistics
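The contrast among the models the abstract mentions can be sketched in a few lines of OLS (a hypothetical two-wave example; the variable names and data are illustrative, not drawn from the article): a raw-difference change score as the outcome, the outcome score alone, and a residualized-change model that regresses the posttest on both the pretest and the predictor.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-wave data: a pretest, a predictor x, and a posttest
# that depends on both.
n = 500
x = rng.normal(0, 1, n)
pre = rng.normal(0, 1, n)
post = 0.6 * pre + 0.4 * x + rng.normal(0, 0.5, n)

def ols(X, y):
    """Ordinary least squares with an intercept; returns coefficients."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Model 1: raw difference (change) score as the outcome.
b_change = ols(x, post - pre)
# Model 2: outcome (posttest) score alone as the outcome.
b_outcome = ols(x, post)
# Model 3: residualized change -- posttest controlling for pretest.
b_resid = ols(np.column_stack([pre, x]), post)

print("change-score slope for x:      ", round(b_change[1], 2))
print("outcome-score slope for x:     ", round(b_outcome[1], 2))
print("residualized-change slope for x:", round(b_resid[2], 2))
```

When the predictor is uncorrelated with the pretest, as simulated here, the three models give similar slopes for `x`; they diverge when pretest and predictor are correlated, which is why the choice among them depends on whether one is studying change processes or assessing consequences of change.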
Thompson, Bruce – 1992
Three criticisms of overreliance on results from statistical significance tests are noted. It is suggested that: (1) statistical significance tests are often tautological; (2) some uses can involve comparisons that are not completely sensible; and (3) using statistical significance tests to evaluate both methodological assumptions (e.g., the…
Descriptors: Effect Size, Estimation (Mathematics), Evaluation Methods, Regression (Statistics)
Peer reviewed
Trochim, William M.K. – Evaluation Review, 1982
Meta-analysis of Title I program evaluations shows that the norm-referenced model overestimates positive effectiveness, while the regression-discontinuity design underestimates it. Potential biases include residual regression artifacts, attrition and time-of-testing problems in the norm-referenced design, and assignment, measurement, and data…
Descriptors: Compensatory Education, Data Collection, Elementary Secondary Education, Evaluation Methods
Echternacht, Gary; Swinton, Spencer – 1979
Title I evaluations using the RMC Model C design depend for their interpretation on the assumption that the regression of posttest on pretest is linear across the cut-score level when there is no treatment, but there are many instances where nonlinearities may occur. If one applies the analysis of covariance, or Model C analysis, large errors may…
Descriptors: Achievement Gains, Analysis of Covariance, Educational Assessment, Elementary Secondary Education
Mandeville, Garrett K.; Anderson, Lorin W. – 1986
School effectiveness indices (SEIs), based on regressing test performance onto earlier test performance and a socioeconomic status measure, were obtained for eight subject-grade combinations from 485 South Carolina elementary schools. The analysis involved school means based on longitudinally matched student data. Reading and mathematics…
Descriptors: Academic Achievement, Achievement Rating, Achievement Tests, Age Differences
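The SEI construction the abstract describes can be sketched as a residual from a school-level regression (a simulated illustration; the coefficients and data are invented, not taken from the study): regress current mean test performance on prior performance and an SES measure, and treat the residual as the effectiveness index.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical school-level data for 485 schools: current mean test
# performance modeled from prior performance and a socioeconomic
# status (SES) measure.
n_schools = 485
prior = rng.normal(50, 10, n_schools)
ses = rng.normal(0, 1, n_schools)
current = 5 + 0.9 * prior + 2.0 * ses + rng.normal(0, 3, n_schools)

# School effectiveness index (SEI) = residual from the regression:
# positive values mean a school scored above expectation given its
# prior performance and SES.
X = np.column_stack([np.ones(n_schools), prior, ses])
beta, *_ = np.linalg.lstsq(X, current, rcond=None)
sei = current - X @ beta

print("schools above expectation:", int((sei > 0).sum()))
```

Because the regression includes an intercept, the residuals average to zero by construction, so an SEI only ranks schools relative to one another, not against an absolute standard.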