Martinez, Michael E.; And Others – 1990
Large-scale testing is dominated by the multiple-choice question format. Widespread use of the format is due, in part, to the ease with which multiple-choice items can be scored automatically. This paper examines automatic scoring procedures for an alternative item type: figural response. Figural response items call for the completion or…
Descriptors: Automation, Computer Assisted Testing, Educational Technology, Multiple Choice Tests
Martinez, Michael E.; Katz, Irvin R. – 1992
Contrasts between constructed response items and stem-equivalent multiple-choice counterparts typically have involved averaging item characteristics, and this aggregation has masked differences in statistical properties at the item level. Moreover, even aggregated format differences have not been explained in terms of differential cognitive…
Descriptors: Architecture, Cognitive Processes, Construct Validity, Constructed Response


