Peer reviewed: Martinez, Michael E.; Bennett, Randy Elliot – Applied Measurement in Education, 1992
New developments in automatically scorable constructed-response item types for large-scale assessment are reviewed across five domains: (1) mathematical reasoning; (2) algebra problem solving; (3) computer science; (4) architecture; and (5) natural language. Ways in which these technologies are likely to shape testing are considered. (SLD)
Descriptors: Algebra, Architecture, Automation, Computer Science
Peer reviewed: Bennett, Randy Elliot; And Others – Journal of Educational Measurement, 1991
The relationship between multiple-choice and free-response items on the College Board's Advanced Placement Computer Science Examination was studied using confirmatory factor analysis. Results from 2 samples of 1,000 high school students suggested that the most parsimonious fit was achieved with a single factor. Implications for construct validity are…
Descriptors: Chi Square, College Entrance Examinations, Comparative Testing, Computer Science


