Showing all 8 results
Peer reviewed
Scherman, Vanessa; Howie, Sarah J.; Bosker, Roel J. – Educational Research and Evaluation, 2011
In information-rich environments, schools are often presented with a myriad of data from which decisions need to be made. The use of the information on a classroom level may be facilitated if performance could be described in terms of levels of proficiency or benchmarks. The aim of this article is to explore benchmarks using data from a monitoring…
Descriptors: Standard Setting, Foreign Countries, Grade 8, Ability
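The proficiency-benchmark idea in the Scherman, Howie, and Bosker entry can be illustrated with a small sketch: scale scores from a monitoring assessment are mapped onto cut scores and reported as the share of learners at each level. The cut scores, labels, and simulated scores below are invented for illustration and are not taken from the study.

```python
# Hypothetical sketch: describing monitoring-assessment performance as
# proficiency benchmarks. All numbers and labels are invented.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(loc=50, scale=10, size=200)   # simulated scale scores

cuts = [40, 50, 60]                               # hypothetical cut scores
labels = ["Below basic", "Basic", "Proficient", "Advanced"]

levels = np.digitize(scores, cuts)                # assign each score to a level
for i, label in enumerate(labels):
    share = np.mean(levels == i)
    print(f"{label:12s}: {share:5.1%} of learners")
```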
Burton, Nancy W.; And Others – 1976
Assessment exercises (items) in three different formats--multiple-choice with an "I don't know" (IDK) option, multiple-choice without the IDK, and open-ended--were placed at the beginning, middle and end of 45-minute assessment packages (instruments). A balanced incomplete blocks analysis of variance was computed to determine the biasing…
Descriptors: Age Differences, Difficulty Level, Educational Assessment, Guessing (Tests)
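A minimal sketch of the kind of position-by-format comparison Burton and others describe, assuming pandas and statsmodels are available: simulated block scores are modeled as a function of package position and item format with an ordinary two-way ANOVA. This is a simplified stand-in, not the balanced-incomplete-blocks analysis of variance actually used, and the data are invented.

```python
# Hypothetical position-by-format analysis on simulated block scores.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
positions = ["beginning", "middle", "end"]
formats = ["mc_idk", "mc_no_idk", "open_ended"]

rows = []
for pos in positions:
    for fmt in formats:
        # Invented fatigue effect: scores drift downward for blocks placed later.
        drift = {"beginning": 0.0, "middle": -0.02, "end": -0.05}[pos]
        scores = rng.normal(0.60 + drift, 0.08, size=30)
        rows += [{"position": pos, "item_format": fmt, "score": s} for s in scores]

df = pd.DataFrame(rows)
model = smf.ols("score ~ C(position) * C(item_format)", data=df).fit()
print(anova_lm(model, typ=2))                     # position, format, interaction
```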
Pearson, P. David; Garavaglia, Diane R. – National Center for Education Statistics, 2003
The purpose of this essay is to explore both what is known and what needs to be learned about the information value of performance items "when they are used in large scale assessments." Within the context of the National Assessment of Educational Progress (NAEP), there is substantial motivation for answering these questions. Over the…
Descriptors: Measurement, National Competency Tests, Test Items, Performance
Peer reviewed
Cohen, Jon; Snow, Stephanie – Journal of Educational Measurement, 2002
Studied the impact of changes in item difficulty on National Assessment of Educational Progress (NAEP) estimates over time through a Monte Carlo study that simulated the responses of 1990 NAEP mathematics respondents to 1990 and 1996 NAEP mathematics items. Results support the idea that these changes have not affected the NAEP trend line.…
Descriptors: Change, Difficulty Level, Estimation (Mathematics), Mathematics Tests
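The Monte Carlo design Cohen and Snow describe can be sketched roughly as follows: one simulated cohort responds, under a three-parameter logistic model, to two item pools, and the resulting score distributions are compared. The item parameters, pool sizes, and sample sizes below are invented; this is not the actual NAEP item pool or estimation procedure.

```python
# Hypothetical Monte Carlo comparison of one cohort on two item pools (3PL model).
import numpy as np

rng = np.random.default_rng(2)

def simulate_3pl(theta, a, b, c):
    """Simulate dichotomous responses under the three-parameter logistic model."""
    p = c + (1 - c) / (1 + np.exp(-1.7 * a[None, :] * (theta[:, None] - b[None, :])))
    return (rng.random(p.shape) < p).astype(int)

n_examinees, n_items = 2000, 40
theta = rng.normal(0, 1, n_examinees)             # one simulated cohort of abilities

# Two invented item pools; the second pool is slightly easier on average.
a90, b90, c90 = rng.uniform(0.8, 1.6, n_items), rng.normal(0.0, 1.0, n_items), np.full(n_items, 0.2)
a96, b96, c96 = rng.uniform(0.8, 1.6, n_items), rng.normal(-0.1, 1.0, n_items), np.full(n_items, 0.2)

pct90 = simulate_3pl(theta, a90, b90, c90).mean()
pct96 = simulate_3pl(theta, a96, b96, c96).mean()
print(f"mean proportion correct, pool A: {pct90:.3f}")
print(f"mean proportion correct, pool B: {pct96:.3f}")
```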
Rudner, Lawrence M.; And Others – 1995
Fit statistics provide a direct measure of assessment accuracy by analyzing the fit of measurement models to an individual's (or group's) response pattern. Students who lose interest during the assessment, for example, will miss exercises that are within their abilities. Such students will respond correctly to some more difficult items and…
Descriptors: Difficulty Level, Educational Assessment, Goodness of Fit, Measurement Techniques
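A minimal sketch of a residual-based person-fit statistic of the sort Rudner and others discuss, here under a Rasch model: a response pattern that misses easy items while answering harder ones correctly inflates the outfit mean square. The item difficulties and response patterns are invented for illustration.

```python
# Hypothetical person-fit check: outfit mean square under a Rasch model.
import numpy as np

def outfit_mean_square(responses, theta, b):
    """Unweighted (outfit) mean-square person-fit statistic for one examinee."""
    p = 1 / (1 + np.exp(-(theta - b)))            # expected probability of success
    z2 = (responses - p) ** 2 / (p * (1 - p))     # squared standardized residuals
    return z2.mean()

b = np.linspace(-2, 2, 9)                         # item difficulties, easy to hard
consistent = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0])   # expected pattern near theta = 0
aberrant   = np.array([0, 0, 1, 0, 1, 1, 0, 1, 1])   # misses easy items, gets hard ones

for label, resp in [("consistent", consistent), ("aberrant", aberrant)]:
    print(label, round(outfit_mean_square(resp, theta=0.0, b=b), 2))
```

Values well above 1 flag response patterns that the model reproduces poorly, which is the signal these studies use to spot disengaged examinees.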
Allen, Nancy L.; Donoghue, John R. – 1995
This Monte Carlo study examined the effect of complex sampling of items on the measurement of differential item functioning (DIF) using the Mantel-Haenszel procedure. Data were generated using a three-parameter logistic item response theory model according to the balanced incomplete block (BIB) design used in the National Assessment of Educational…
Descriptors: Computer Assisted Testing, Difficulty Level, Elementary Secondary Education, Identification
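A rough sketch of a Mantel-Haenszel DIF check applied to data generated from a three-parameter logistic model, in the spirit of the Allen and Donoghue study: examinees are stratified by their rest score, and the MH common odds ratio and chi-square are computed for one studied item. All parameters are invented, no DIF is built in, and the balanced incomplete block booklet design is not reproduced.

```python
# Hypothetical Mantel-Haenszel DIF computation on simulated 3PL data.
import numpy as np

rng = np.random.default_rng(3)

def sim_3pl(theta, a, b, c):
    p = c + (1 - c) / (1 + np.exp(-1.7 * a * (theta[:, None] - b)))
    return (rng.random(p.shape) < p).astype(int)

n_items = 20
a = rng.uniform(0.8, 1.5, n_items)
b = rng.normal(0, 1, n_items)
c = np.full(n_items, 0.2)

theta_ref = rng.normal(0.0, 1, 3000)              # reference group
theta_foc = rng.normal(-0.5, 1, 1500)             # focal group (impact only, no DIF)

x_ref, x_foc = sim_3pl(theta_ref, a, b, c), sim_3pl(theta_foc, a, b, c)

item = 0                                          # studied item
rest_ref = x_ref.sum(axis=1) - x_ref[:, item]     # stratifying (rest) scores
rest_foc = x_foc.sum(axis=1) - x_foc[:, item]

num = den = sumA = sumEA = sumVarA = 0.0
for k in range(n_items):                          # one 2x2 table per rest-score stratum
    r = x_ref[rest_ref == k, item]
    f = x_foc[rest_foc == k, item]
    A, B = r.sum(), len(r) - r.sum()              # reference: right / wrong
    C, D = f.sum(), len(f) - f.sum()              # focal: right / wrong
    N = A + B + C + D
    if N < 2 or (A + C) == 0 or (B + D) == 0:
        continue
    num += A * D / N
    den += B * C / N
    sumA += A
    sumEA += (A + B) * (A + C) / N
    sumVarA += (A + B) * (C + D) * (A + C) * (B + D) / (N * N * (N - 1))

mh_or = num / den                                 # MH common odds ratio
mh_chi2 = (abs(sumA - sumEA) - 0.5) ** 2 / sumVarA
print(f"MH common odds ratio: {mh_or:.2f}   MH chi-square: {mh_chi2:.2f}")
```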
Martinez, Michael E. – 1990
In contrast to multiple-choice test questions, figural response items call for constructed responses and rely upon figural material, such as illustrations and graphs, as the response medium. Figural response questions in various science domains were created and administered to a sample of 347 fourth, 365 eighth, and 322 twelfth graders. Data were…
Descriptors: Comparative Analysis, Constructed Response, Difficulty Level, Elementary Education
Park, Chung; Allen, Nancy L. – 1994
This study is part of continuing research into the meaning of future National Assessment of Educational Progress (NAEP) science scales. In this study, the test framework, as examined by NAEP's consensus process, and attributes of the items, identified by science experts, cognitive scientists, and measurement specialists, are examined. Preliminary…
Descriptors: Communication (Thought Transfer), Comparative Analysis, Construct Validity, Content Validity