| Publication Date | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 2 |
| Since 2007 (last 20 years) | 3 |
| Source | Count |
| --- | --- |
| Journal of Applied Testing Technology | 3 |
| Author | Count |
| --- | --- |
| Dupray, Laurence M. | 1 |
| Hou, Xiaodong | 1 |
| Kosh, Audra E. | 1 |
| Lissitz, Robert W. | 1 |
| Slater, Sharon Cadman | 1 |
| Soland, James | 1 |
| Wise, Steven L. | 1 |
| Publication Type | Count |
| --- | --- |
| Journal Articles | 3 |
| Reports - Research | 2 |
| Reports - Descriptive | 1 |
| Education Level | Count |
| --- | --- |
| Elementary Secondary Education | 3 |
| Early Childhood Education | 1 |
| Elementary Education | 1 |
| High Schools | 1 |
| Kindergarten | 1 |
| Primary Education | 1 |
| Secondary Education | 1 |
| Location | Count |
| --- | --- |
| Maryland | 1 |
| Assessments and Surveys | Count |
| --- | --- |
| Measures of Academic Progress | 1 |
Wise, Steven L.; Soland, James; Dupray, Laurence M. – Journal of Applied Testing Technology, 2021
Technology-Enhanced Items (TEIs) have been purported to be more motivating and engaging to test takers than traditional multiple-choice items. The claim of enhanced engagement, however, has thus far received limited research attention. This study examined the rates of rapid-guessing behavior received by three types of items (multiple-choice,…
Descriptors: Test Items, Guessing (Tests), Multiple Choice Tests, Achievement Tests
Kosh, Audra E. – Journal of Applied Testing Technology, 2021
In recent years, Automatic Item Generation (AIG) has increasingly shifted from theoretical research to operational implementation, a shift that has raised some unforeseen practical challenges. In particular, generating high-quality answer choices is difficult; for example, the answer choices must blend together plausibly for all possible item…
Descriptors: Test Items, Multiple Choice Tests, Decision Making, Test Construction
Lissitz, Robert W.; Hou, Xiaodong; Slater, Sharon Cadman – Journal of Applied Testing Technology, 2012
This article investigates several questions regarding the impact of different item formats on measurement characteristics. Constructed-response (CR) items and multiple-choice (MC) items obviously differ in their formats and in the resources needed to score them. As such, they have been the subject of considerable discussion regarding the impact of…
Descriptors: Computer Assisted Testing, Scoring, Evaluation Problems, Psychometrics
