Showing all 12 results
Peer reviewed
Li, Feng; Wang, Xi; He, Xiaona; Cheng, Liang; Wang, Yiyu – Education and Information Technologies, 2022
This study adopted a meta-analysis to explore the effectiveness of unplugged activities (UA) and programming exercises (PE) teaching approaches on computational thinking (CT) education. Through a two-stage literature collection and selection process, 29 articles were included in the meta-analysis, 31 independent effect sizes (16 of UA and 15 of…
Descriptors: Instructional Effectiveness, Learning Activities, Programming, Computation
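As a rough illustration of the aggregation step such a meta-analysis performs (a sketch with made-up numbers, not the authors' own code), the following Python snippet pools independent effect sizes under a DerSimonian-Laird random-effects model:

    import math

    def pooled_effect(effects, variances):
        """Return (pooled effect, standard error) under a random-effects model."""
        w = [1.0 / v for v in variances]                               # fixed-effect weights
        fixed = sum(wi * es for wi, es in zip(w, effects)) / sum(w)
        q = sum(wi * (es - fixed) ** 2 for wi, es in zip(w, effects))  # heterogeneity statistic Q
        c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - (len(effects) - 1)) / c)                  # between-study variance
        w_re = [1.0 / (v + tau2) for v in variances]                   # random-effects weights
        pooled = sum(wi * es for wi, es in zip(w_re, effects)) / sum(w_re)
        return pooled, math.sqrt(1.0 / sum(w_re))

    # Illustrative effect sizes (e.g., Hedges' g) and their sampling variances
    g, v = [0.45, 0.62, 0.30, 0.51], [0.020, 0.035, 0.015, 0.040]
    est, se = pooled_effect(g, v)
    print(f"pooled g = {est:.2f}, 95% CI [{est - 1.96 * se:.2f}, {est + 1.96 * se:.2f}]")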
Peer reviewed
Zheng, Binbin; Warschauer, Mark; Lin, Chin-Hsi; Chang, Chi – Review of Educational Research, 2016
Over the past decade, the number of one-to-one laptop programs in schools has steadily increased. Despite the growth of such programs, there is little consensus about whether they contribute to improved educational outcomes. This article reviews 65 journal articles and 31 doctoral dissertations published from January 2001 to May 2015 to examine…
Descriptors: Meta Analysis, Laptop Computers, Access to Computers, Computer Uses in Education
Roschelle, Jeremy; Murphy, Robert; Feng, Mingyu; Bakia, Marianne – Grantee Submission, 2017
In a rigorous evaluation of ASSISTments as an online homework support conducted in the state of Maine, SRI International reported that "the intervention significantly increased student scores on an end-of-the-year standardized mathematics assessment as compared with a control group that continued with existing homework practices."…
Descriptors: Homework, Program Effectiveness, Effect Size, Cost Effectiveness
Peer reviewed
Brandon, Paul R.; Harrison, George M.; Lawton, Brian E. – American Journal of Evaluation, 2013
When evaluators plan site-randomized experiments, they must conduct the appropriate statistical power analyses. These analyses are most likely to be valid when they are based on data from the jurisdictions in which the studies are to be conducted. In this method note, we provide software code, in the form of a SAS macro, for producing statistical…
Descriptors: Statistical Analysis, Correlation, Effect Size, Benchmarking
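The macro itself is in SAS; as a hedged, language-neutral sketch of the kind of calculation it supports, the Python snippet below computes an approximate minimum detectable effect size (MDES) for a two-level site-randomized design from an assumed intraclass correlation. The multiplier 2.8 roughly corresponds to 80% power with a two-tailed test at alpha = .05; all inputs are illustrative.

    import math

    def mdes(icc, n_sites, n_per_site, p_treated=0.5, multiplier=2.8):
        """Approximate MDES (in SD units) for a site-randomized trial."""
        denom = p_treated * (1 - p_treated) * n_sites
        between = icc / denom                        # site-level variance component
        within = (1 - icc) / (denom * n_per_site)    # student-level variance component
        return multiplier * math.sqrt(between + within)

    print(round(mdes(icc=0.15, n_sites=40, n_per_site=60), 2))  # ~0.36 SD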
Peer reviewed
Ye, Meng; Xin, Tao – Educational and Psychological Measurement, 2014
The authors explored the effects of drifting common items on vertical scaling within the higher order framework of item parameter drift (IPD). The results showed that if IPD occurred between a pair of test levels, the scaling performance started to deviate from the ideal state, as indicated by bias of scaling. When there were two items drifting…
Descriptors: Scaling, Test Items, Equated Scores, Achievement Gains
Deke, John; Dragoset, Lisa – Mathematica Policy Research, Inc., 2012
The regression discontinuity design (RDD) has the potential to yield findings with causal validity approaching that of the randomized controlled trial (RCT). However, Schochet (2008a) estimated that, on average, an RDD study of an education intervention would need to include three to four times as many schools or students as an RCT to produce…
Descriptors: Research Design, Elementary Secondary Education, Regression (Statistics), Educational Research
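One hedged way to see where a multiplier of three to four can come from (the standard variance-inflation argument, not a reproduction of the report's calculations): in an RDD the treatment indicator is strongly correlated with the assignment-variable terms in the model, which inflates the variance of the impact estimate by roughly 1 / (1 - R^2), where R^2 is from regressing treatment status on those terms.

    def rdd_sample_multiplier(r_squared):
        """Approximate RDD-to-RCT sample size ratio for equal precision."""
        return 1.0 / (1.0 - r_squared)

    for r2 in (0.65, 0.70, 0.75):
        print(r2, round(rdd_sample_multiplier(r2), 1))  # ~2.9, ~3.3, ~4.0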
Peer reviewed
Schochet, Peter Z.; Chiang, Hanley S. – Journal of Educational and Behavioral Statistics, 2011
In randomized control trials (RCTs) in the education field, the complier average causal effect (CACE) parameter is often of policy interest, because it pertains to intervention effects for students who receive a meaningful dose of treatment services. This article uses a causal inference and instrumental variables framework to examine the…
Descriptors: Computation, Identification, Educational Research, Research Design
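A minimal sketch of the standard estimator in this framework (illustrative numbers, not results from the article): the CACE is the intent-to-treat effect on the outcome divided by the intent-to-treat effect on receipt of services, the familiar Wald/Bloom adjustment.

    def cace(itt_effect_on_outcome, receipt_rate_treatment, receipt_rate_control):
        """Complier average causal effect via the Wald/Bloom adjustment."""
        return itt_effect_on_outcome / (receipt_rate_treatment - receipt_rate_control)

    # A 0.06 SD ITT effect with 60% take-up in the treatment group and 5%
    # crossover in the control group scales to about 0.11 SD for compliers.
    print(round(cace(0.06, 0.60, 0.05), 2))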
Peer reviewed
Han, Bing; Dalal, Siddhartha R.; McCaffrey, Daniel F. – Journal of Educational and Behavioral Statistics, 2012
There is widespread interest in using various statistical inference tools as a part of the evaluations for individual teachers and schools. Evaluation systems typically involve classifying hundreds or even thousands of teachers or schools according to their estimated performance. Many current evaluations are largely based on individual estimates…
Descriptors: Statistical Inference, Error of Measurement, Classification, Statistical Analysis
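As a hedged illustration of the underlying concern (a simulation sketch, not the article's procedure): when classification decisions rest on individual point estimates, measurement error alone pushes a nontrivial share of units across any fixed cut score.

    import random

    random.seed(0)
    n, cut, se = 1000, 0.0, 0.15                        # units, cut score, standard error
    true = [random.gauss(0, 0.20) for _ in range(n)]    # true performance
    est = [t + random.gauss(0, se) for t in true]       # estimated performance
    wrong = sum((t > cut) != (e > cut) for t, e in zip(true, est))
    print(f"{wrong / n:.0%} of units fall on the wrong side of the cut score")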
Peer reviewed
PDF on ERIC
Schochet, Peter Z.; Puma, Mike; Deke, John – National Center for Education Evaluation and Regional Assistance, 2014
This report summarizes the complex research literature on quantitative methods for assessing how impacts of educational interventions on instructional practices and student learning differ across students, educators, and schools. It also provides technical guidance about the use and interpretation of these methods. The research topics addressed…
Descriptors: Statistical Analysis, Evaluation Methods, Educational Research, Intervention
Peer reviewed
Codding, Robin S.; Hilt-Panahon, Alexandra; Panahon, Carlos J.; Benson, Jaime L. – Education and Treatment of Children, 2009
For school professionals to make informed decisions about appropriate interventions, they need to know what is available to aid students. The purpose of the present literature review was to examine specific interventions that could be employed with students identified as needing additional support in…
Descriptors: Literature Reviews, Intervention, Mathematics Instruction, Individualized Instruction
Peer reviewed
PDF on ERIC
Olsen, Robert B.; Unlu, Fatih; Price, Cristofer; Jaciw, Andrew P. – National Center for Education Evaluation and Regional Assistance, 2011
This report examines the differences in impact estimates and standard errors that arise when these are derived using state achievement tests only (as pre-tests and post-tests), study-administered tests only, or some combination of state- and study-administered tests. State tests may yield different evaluation results relative to a test that is…
Descriptors: Achievement Tests, Standardized Tests, State Standards, Reading Achievement
Burke, Mary A.; Sass, Tim R. – Federal Reserve Bank of Boston, 2008
In this paper we analyze the impact of classroom peers on individual student performance with a unique longitudinal data set covering all Florida public school students in grades 3-10 over a five-year period. Unlike many previous data sets used to study peer effects in education, our data set allows us to identify each member of a given student's…
Descriptors: Teacher Effectiveness, Student Evaluation, Academic Achievement, Peer Groups