ERIC Number: ED579058
Record Type: Non-Journal
Publication Date: 2012-Aug-10
Pages: 58
Abstractor: ERIC
ISBN: N/A
ISSN: N/A
EISSN: N/A
Available Date: N/A
Does the Model Matter? Exploring the Relationship between Different Student Achievement-Based Teacher Assessments. CEDR Working Paper. WP #2012-6
Goldhaber, Dan; Walch, Joe; Gabele, Brian
Center for Education Data & Research
Policymakers appear increasingly inclined to utilize measures of student achievement, often state assessment results, to inform high-stakes teacher personnel decisions. This inclination has been spurred by the federal government's Teacher Incentive Fund (TIF) and Race to the Top (RttT) grant programs, each of which urges states and localities to tie teacher performance to compensation, renewal, and tenure. While conceptually simple, the idea of using student outcomes as a teacher performance measure is complex to implement for a variety of reasons, not least of which is the fact that there is no universally agreed-upon statistical methodology for translating student achievement measures into teacher performance. This report uses statewide data from North Carolina to evaluate different methodologies for translating student achievement results into measures of teacher performance. In particular, the report focuses on the extent to which teacher effect estimates differ across modeling approaches and the extent to which classroom-level characteristics predict these differences. Findings are consistent with prior research: teacher effect estimates from models that include student background and classroom characteristics are highly correlated with estimates from simpler specifications that include only a single-subject lagged test score, while value-added models (VAMs) estimated with school or student fixed effects show lower correlations. Interestingly, teacher effectiveness estimates based on median student growth percentiles are highly correlated with estimates from VAMs that include only a lagged test score and those that also include lagged scores and student background characteristics, despite the fact that the two methods for estimating teacher effectiveness are, at least conceptually, quite different. However, even when the correlations between job performance estimates generated by different models are quite high, differences in the composition of students in teachers' classrooms can lead to sizable differences in individual teachers' effectiveness estimates across models.
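[Editor's illustrative sketch, not code from the report: the comparison described in the abstract amounts to estimating teacher effects under different value-added specifications and correlating the resulting estimates. The sketch below uses synthetic data and hypothetical variable names (score, lag_score, frl, teacher_id) to show two specifications, one with only a lagged test score and one that adds a student background control.]

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data: 50 teachers with 25 students each (illustrative only).
rng = np.random.default_rng(0)
n_teachers, n_students = 50, 25
teacher_id = np.repeat(np.arange(n_teachers), n_students)
true_effect = rng.normal(0, 0.2, n_teachers)[teacher_id]
lag_score = rng.normal(0, 1, teacher_id.size)
frl = rng.binomial(1, 0.4, teacher_id.size)  # hypothetical student background indicator
score = 0.7 * lag_score - 0.1 * frl + true_effect + rng.normal(0, 0.5, teacher_id.size)
df = pd.DataFrame({"score": score, "lag_score": lag_score,
                   "frl": frl, "teacher_id": teacher_id})

# Specification 1: current score on lagged score plus teacher indicators.
m1 = smf.ols("score ~ lag_score + C(teacher_id)", data=df).fit()
# Specification 2: adds a student background characteristic.
m2 = smf.ols("score ~ lag_score + frl + C(teacher_id)", data=df).fit()

# The teacher indicator coefficients serve as the effectiveness estimates.
fx1 = m1.params.filter(like="C(teacher_id)")
fx2 = m2.params.filter(like="C(teacher_id)")

# Correlation between the two sets of teacher effect estimates.
print(np.corrcoef(fx1, fx2)[0, 1])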
Descriptors: Models, Academic Achievement, Teacher Evaluation, Performance Based Assessment, Evaluation Methods, Predictor Variables, Correlation, Scores, Value Added Models, Student Characteristics, Teacher Effectiveness, Statistical Analysis, Regression (Statistics)
Center for Education Data & Research. 3876 Bridge Way North Suite 201, Seattle, WA 98103. Tel: 206-547-5585; Fax: 206-547-1641; e-mail: cedr@uw.edu; Web site: http://www.cedr.us
Publication Type: Reports - Research
Education Level: N/A
Audience: N/A
Language: English
Sponsor: Spencer Foundation
Authoring Institution: Center for Education Data & Research (CEDR)
Identifiers - Location: North Carolina
Grant or Contract Numbers: N/A
IES Cited: ED576984
Author Affiliations: N/A