ERIC Number: ED542761
Record Type: Non-Journal
Publication Date: 2012-Feb
Pages: 80
Abstractor: ERIC
ISBN: N/A
ISSN: N/A
EISSN: N/A
Available Date: N/A
Growth Model Comparison Study: Practical Implications of Alternative Models for Evaluating School Performance
Goldschmidt, Pete; Choi, Kilchan; Beaudoin, J.P.
Council of Chief State School Officers
The Elementary and Secondary Education Act (ESEA) has had several tangible effects on education and the monitoring of education, with both intended and unintended consequences. ESEA's newer generation of federal programs, such as Race to the Top, and the recent ESEA flexibility guidelines have continued to push the development of methods to accurately and fairly monitor school (and, more recently, teacher) performance. The purpose of this study is to compare several different growth models and examine the empirical characteristics of each. This study differs from previous research comparing models for accountability purposes in that its focus is broader--it is based on large-scale assessment results from four states (Delaware, Hawaii, North Carolina, and Wisconsin) across two cohorts of students (each with three consecutive years of assessment results), and it explicitly considers model results for both elementary and middle schools. The study addresses the following research questions regarding the performance of the different growth models: (1) Overall, does the model matter?; (2) Do different models lead to different inferences about schools?; (3) How accurately do models classify schools into performance categories?; (4) Are models consistent in classifying schools from one year to the next?; (5) How are models influenced by school intake characteristics (e.g., percent English language learners [ELL] and students receiving free or reduced-price lunch [FRL])?; (6) Do models perform similarly for elementary and middle schools?; and (7) Do models behave similarly across states? The results of these analyses confirm that no single model can unequivocally be assumed to provide the best results, for two reasons: first, different models address different questions about schools; and second, the empirical results indicate that context matters when examining models. By context the authors mean that the state in which a model is run affects how the model may work. These state effects comprise several confounded components, including test scales, testing procedures, student characteristics, and school characteristics. An accountability model should not be unduly influenced by factors outside of schools' control, and the models clearly differ in this respect. Distinguishing between a school's ability to facilitate learning and a school's performance as a function of advantageous (or challenging) student enrollment characteristics is where statistical machinery provides its biggest benefit. Appended are: (1) Data Quality; (2) Data Elements for Each State; and (3) Detailed Results of Overall Model Impact. (Contains 7 figures, 18 tables, and 26 footnotes.) [This paper was commissioned by the Technical Issues in Large-Scale Assessment (TILSA) and the State Collaborative on Assessment and Student Standards (SCASS).]
Council of Chief State School Officers. One Massachusetts Avenue NW Suite 700, Washington, DC 20001. Tel: 202-336-7016; Fax: 202-408-8072; e-mail: pubs@ccsso.org; Web site: http://www.ccsso.org
Publication Type: Reports - Research
Education Level: Elementary Education; Middle Schools
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: Council of Chief State School Officers
Identifiers - Location: Delaware; Hawaii; North Carolina; Wisconsin
Identifiers - Laws, Policies, & Programs: Elementary and Secondary Education Act; Race to the Top
Grant or Contract Numbers: N/A
IES Cited: ED563445
Author Affiliations: N/A