ERIC Number: EJ1002208
Record Type: Journal
Publication Date: 2013
Pages: 5
Abstractor: ERIC
ISBN: N/A
ISSN: ISSN-1536-6367
EISSN: N/A
Available Date: N/A
Why Lessons Learned from the Past Require Haertel's Expanded Scope for Test Validation
Shepard, Lorrie A.
Measurement: Interdisciplinary Research and Perspectives, v11 n1-2 p50-54 2013
In his article, Haertel (this issue) asks a fundamental question about how the use of a test is expected to cause improvements in the educational system and in learning. He also considers how test validity should be investigated and argues for a more expansive view of validity that does not stop with scoring or generalization (the more technical and familiar of Kane's [2006] four stages of interpretive arguments), or even with extrapolation. Rather, "a full consideration of validity requires that the interpretive argument be carried all the way through to the end," to the stage-four questions of how test scores are actually used. The author applauds Haertel's assertion that testing effects should be the concern of measurement professionals and argues further that testing consequences can very often be linked directly to the substantive adequacy of the test itself, especially to the extrapolation link in the interpretive argument. Research evidence about present-day tests and their distorting effects on teaching and learning has been repeated ad nauseam. Lessons learned from this literature bear repeating, however, for several reasons. First, the problems have not yet been solved. Second, past experience with both intended and unintended testing effects provides an important list of things to watch for in new validity studies. Finally, and most importantly, there are many newcomers to the testing, learning, and learning-verification enterprise who have no experience with the failure of tests to stand in as proxies for the real thing, and certainly no experience with when this substitution leads to reasonable inferences and when it does not. Because of term limits, policy makers today have no idea that they are making the exact same claims that were made 20 years ago, nor do they know why the promises of test-driven accountability did not come true. Measurement is, by definition, about quantification.
But Haertel's discussion of indirect uses of tests reminds measurement professionals that testing is also about representation: signaling what learning and achievement are taken to mean. Haertel asks that measurement experts take greater responsibility for the indirect as well as the direct effects of testing--not for crazy departures from the goals stated when a testing program was launched, but for those very claims that were intended. Indeed, testing programs most often begin with a promise to improve schooling. He asks that this claim and its implied theory of action be made explicit and tackled formally as part of validity research. The author agrees. (Contains 1 footnote.)
Descriptors: Educational Testing, Test Validity, Test Results, Test Construction, Curriculum Design, Formative Evaluation
Psychology Press. Available from: Taylor & Francis, Ltd. 325 Chestnut Street Suite 800, Philadelphia, PA 19106. Tel: 800-354-1420; Fax: 215-625-2940; Web site: http://www.tandf.co.uk/journals
Publication Type: Journal Articles; Opinion Papers
Education Level: Elementary Secondary Education
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Grant or Contract Numbers: N/A
Author Affiliations: N/A