ERIC Number: ED545896
Record Type: Non-Journal
Publication Date: 2003-Apr
Pages: 48
Abstractor: ERIC
ISBN: N/A
ISSN: N/A
EISSN: N/A
Available Date: N/A
NAEP Validity Studies: Improving the Information Value of Performance Items in Large Scale Assessments. Working Paper No. 2003-08
Pearson, P. David; Garavaglia, Diane R.
National Center for Education Statistics
The purpose of this essay is to explore both what is known and what needs to be learned about the information value of performance items "when they are used in large scale assessments." Within the context of the National Assessment of Educational Progress (NAEP), there is substantial motivation for answering these questions. Over the past decade, in order to adequately portray the breadth and depth of important curriculum standards, NAEP designers have invested substantial time and energy in creating extended constructed-response items, and enormous financial resources in scoring them. While these items are popular with curriculum experts within various content areas (Linn, Glaser, & Bohrnstedt, 1997), it is not clear whether they possess the marginal utility required to justify their cost; that is, they may not provide new information above and beyond that provided by a more standard mix of multiple-choice and short items with known measurement characteristics and much more economical scoring protocols. Even worse, there is some reason to believe, based on research in non-NAEP settings, that extended constructed-response items may provide negative returns in terms of the overall goal of accurate measurement of performance on some broad domain such as reading, mathematics, or science (see Forsyth, Hambleton, Linn, Mislevy, & Yen, 1996). To investigate this claim, the authors traversed the "information value" terrain along as many paths as they could find--combing the measurement literature to determine the various ways in which scholars have conceptualized and operationalized the "information value" construct, reviewing research conducted within those approaches, consulting essays that emphasize the importance of strong conceptual grounding in content frameworks, studying the broadest available construal of the construct of validity (e.g., Messick, 1989), and, finally and most unsatisfyingly, attempting to determine, at a conceptual and philosophical level, what the assessment community means when it talks about the information provided by assessments. This essay is organized into four sections. First, the authors consider the construct of information value in its broadest philosophical sense and then describe the classical ways of operationalizing it. Second, they review the available literature within each of these operational traditions (IRT, factor analysis, correlational studies). Third, they consider some alternative versions of information value, based more on cognitive, conceptual, and pragmatic considerations. Finally, they outline a series of studies that they believe ought to be supported by the National Center for Education Statistics (NCES) in order to answer the question of how NAEP can be modified to increase its "information value."
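As background for the IRT tradition mentioned in the abstract (this is standard psychometric usage, not a formulation drawn from the paper itself): an item's "information" is typically the Fisher information of its response function. For a two-parameter logistic item with discrimination $a_i$ and difficulty $b_i$,

  P_i(\theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}}, \qquad
  I_i(\theta) = \frac{\left[P_i'(\theta)\right]^2}{P_i(\theta)\,\left[1 - P_i(\theta)\right]} = a_i^2\, P_i(\theta)\,\left[1 - P_i(\theta)\right],

and test information is the sum $\sum_i I_i(\theta)$. In these terms, the question the essay raises is whether extended constructed-response items add appreciably to total test information beyond what a standard mix of multiple-choice items already supplies, relative to their scoring cost.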
Descriptors: Measurement, National Competency Tests, Test Items, Performance, Responses, Item Response Theory, Factor Analysis, Correlation, Test Interpretation, Research Problems, Educational Research, Multiple Choice Tests, Scoring Rubrics, Cognitive Processes, Difficulty Level, Scoring, Test Bias, Reading Tests
National Center for Education Statistics. Available from: ED Pubs. P.O. Box 1398, Jessup, MD 20794-1398. Tel: 877-433-7827; Web site: http://nces.ed.gov/
Publication Type: Information Analyses; Reports - Evaluative
Education Level: Elementary Secondary Education
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: National Center for Education Statistics (ED)
Identifiers - Assessments and Surveys: National Assessment of Educational Progress
IES Funded: Yes
Grant or Contract Numbers: N/A
Author Affiliations: N/A