Haladyna, Thomas M.; Downing, Steven M. – Applied Measurement in Education, 1989 (Peer reviewed)
Results of 96 theoretical and empirical studies were reviewed to determine whether they support a taxonomy of 43 rules for writing multiple-choice test items. The taxonomy was derived from an analysis of 46 textbooks on multiple-choice item writing. For nearly half of the rules, no research was found. (SLD)
Descriptors: Classification, Literature Reviews, Multiple Choice Tests, Test Construction
Martinez, Michael E.; Bennett, Randy Elliot – Applied Measurement in Education, 1992 (Peer reviewed)
New developments in the use of automatically scorable constructed response item types for large-scale assessment are reviewed for five domains: (1) mathematical reasoning; (2) algebra problem solving; (3) computer science; (4) architecture; and (5) natural language. Ways in which these technologies are likely to shape testing are considered. (SLD)
Descriptors: Algebra, Architecture, Automation, Computer Science


