Clauser, Brian E.; Kane, Michael; Clauser, Jerome C. – Journal of Educational Measurement, 2020
An Angoff standard-setting study generally yields judgments on a number of items by a number of judges (who may or may not be nested in panels). Variability associated with judges (and possibly panels) contributes error to the resulting cut score. The variability associated with items plays a more complicated role. To the extent that the mean item…
Descriptors: Cutting Scores, Generalization, Decision Making, Standard Setting
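
To make the error decomposition concrete: below is a minimal sketch, in Python, of how an Angoff cut score and the judge-related component of its error might be computed. The panel size, ratings, and variable names are hypothetical illustrations, not data or code from the study.

```python
# Minimal sketch of an Angoff cut-score calculation, assuming a small
# hypothetical panel: rows are judges, columns are items, and each entry
# is a judge's estimate of the probability that a minimally competent
# examinee answers the item correctly. All values are illustrative.
import numpy as np

ratings = np.array([
    [0.70, 0.55, 0.80, 0.60],   # judge 1
    [0.65, 0.60, 0.75, 0.55],   # judge 2
    [0.75, 0.50, 0.85, 0.65],   # judge 3
])
n_judges, n_items = ratings.shape

# The cut score is the grand mean of the ratings (multiply by the
# number of items if a raw-score cut is wanted).
cut_score = ratings.mean()

# Each judge's implied cut score; the spread across judges drives the
# error term the abstract attributes to judges.
judge_cuts = ratings.mean(axis=1)
se_judges = judge_cuts.std(ddof=1) / np.sqrt(n_judges)

print(f"cut score (proportion correct): {cut_score:.3f}")
print(f"standard error from judge variability: {se_judges:.3f}")
```

When generalizing over items as well, the item and judge-by-item variance components would also enter the error term, which is the more complicated role the abstract alludes to.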
Ascalon, M. Evelina; Meyers, Lawrence S.; Davis, Bruce W.; Smits, Niels – Applied Measurement in Education, 2007
This article examined two item-writing guidelines: the format of the item stem and the homogeneity of the answer set. Answering the call of Haladyna, Downing, and Rodriguez (2002) for empirical tests of item-writing guidelines and extending the work of Smith and Smith (1988) on differential use of item characteristics, a mock multiple-choice driver's…
Descriptors: Guidelines, Difficulty Level, Standard Setting, Driver Education
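
As a rough illustration of what an empirical test of an item-writing guideline can look like, the following Python sketch compares classical item difficulty (proportion correct) between question-format and completion-format stems. All response data are fabricated, and nothing here reproduces the article's materials or analysis.

```python
# Minimal sketch: compare classical item difficulty (proportion correct)
# across two item-stem formats. Responses are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 0/1 responses: 200 examinees x 10 items per format (hypothetical).
question_stem = rng.binomial(1, 0.70, size=(200, 10))
completion_stem = rng.binomial(1, 0.62, size=(200, 10))

# Classical item difficulty: per-item proportion correct.
p_question = question_stem.mean(axis=0)
p_completion = completion_stem.mean(axis=0)

# A simple two-sample comparison of mean difficulty across formats.
t, p_value = stats.ttest_ind(p_question, p_completion)

print(f"mean difficulty, question stems:   {p_question.mean():.3f}")
print(f"mean difficulty, completion stems: {p_completion.mean():.3f}")
print(f"t = {t:.2f}, p = {p_value:.3f}")
```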
