| Publication Date | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 1 |
| Since 2007 (last 20 years) | 2 |
| Source | Records |
| --- | --- |
| Applied Measurement in Education | 5 |
| Author | Records |
| --- | --- |
| Guo, Hongwen | 1 |
| Haberman, Shelby | 1 |
| Haladyna, Thomas A. | 1 |
| Haladyna, Thomas M. | 1 |
| Lam, Tony C. M. | 1 |
| Liu, Ou Lydia | 1 |
| Mills, Craig N. | 1 |
| Paek, Insu | 1 |
| Rios, Joseph A. | 1 |
| Rodriguez, Michael C. | 1 |
| Stevens, Craig | 1 |
| Publication Type | Records |
| --- | --- |
| Journal Articles | 5 |
| Reports - Research | 3 |
| Reports - Evaluative | 2 |
| Education Level | Records |
| --- | --- |
| Higher Education | 2 |
| Postsecondary Education | 2 |
Haladyna, Thomas M.; Rodriguez, Michael C.; Stevens, Craig – Applied Measurement in Education, 2019
Evidence is mounting in support of the guidance to employ more three-option multiple-choice items. Theoretical analyses, empirical results, and practical considerations indicate that such items are of equal or higher quality than four- or five-option items, and that more items can be administered to improve content coverage. This study looks at 58 tests,…
Descriptors: Multiple Choice Tests, Test Items, Testing Problems, Guessing (Tests)
Guo, Hongwen; Rios, Joseph A.; Haberman, Shelby; Liu, Ou Lydia; Wang, Jing; Paek, Insu – Applied Measurement in Education, 2016
Unmotivated test takers who engage in rapid guessing on item responses can negatively affect validity studies and evaluations of teacher and institution performance, making it critical to identify these test takers. The authors propose a new nonparametric method for finding response-time thresholds for flagging item responses that result from rapid-guessing…
Descriptors: Guessing (Tests), Reaction Time, Nonparametric Statistics, Models
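The abstract does not spell out the authors' nonparametric thresholding procedure, so the sketch below only illustrates the general idea it builds on: flag an item response as a likely rapid guess when its response time falls below an item-level threshold. The fixed fraction-of-median threshold, the function name `flag_rapid_guesses`, and the toy data are assumptions for illustration, not the method proposed in the article.

```python
import numpy as np

def flag_rapid_guesses(response_times, fraction=0.10):
    """Flag responses whose time falls below `fraction` of the item's median RT.

    response_times : 2-D array-like, shape (n_examinees, n_items), in seconds.
    Returns a boolean array of the same shape; True marks a flagged response.
    """
    rt = np.asarray(response_times, dtype=float)
    thresholds = fraction * np.nanmedian(rt, axis=0)  # one threshold per item
    return rt < thresholds                            # broadcasts over examinees

# Toy example: 4 examinees x 3 items (times in seconds).
rts = np.array([[12.0, 30.0, 25.0],
                [ 1.0,  2.0, 28.0],   # very fast on the first two items
                [15.0, 22.0, 19.0],
                [ 9.0,  0.5, 21.0]])
flags = flag_rapid_guesses(rts)
print(flags.sum(axis=1))  # flagged responses per examinee -> [0 1 0 1]
```

In practice the flagged responses would then be removed or down-weighted before the validity or performance analyses the abstract mentions.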
Lam, Tony C. M.; Stevens, Joseph J. – Applied Measurement in Education, 1994
Effects of the following three variables on rating scale response were studied: (1) polarization of opinion regarding scale content; (2) intensity of item wording; and (3) psychological width of the scale. Results with 167 college students suggest best ways to balance polarization and item wording regardless of scale width. (SLD)
Descriptors: College Students, Content Analysis, Higher Education, Rating Scales
Mills, Craig N.; Stocking, Martha L. – Applied Measurement in Education, 1996
Issues that must be addressed in the large-scale application of computerized adaptive testing are explored, including considerations of test design, scoring, test administration, item and item bank development, and other aspects of test construction. Possible solutions and areas in which additional work is needed are identified. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Elementary Secondary Education, Higher Education
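The abstract enumerates design, scoring, and item-bank issues rather than an algorithm, so the sketch below is only a reminder of the core adaptive-testing loop those issues sit on top of: select the most informative unused item at the current ability estimate, record the response, and re-estimate ability. The 2PL model, EAP scoring, maximum-information selection, and the simulated item bank are illustrative assumptions, not recommendations from the article.

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a ** 2 * p * (1.0 - p)

def eap_estimate(responses, a, b, grid=np.linspace(-4, 4, 81)):
    """Expected a posteriori ability estimate with a standard-normal prior."""
    posterior = np.exp(-0.5 * grid ** 2)              # prior weights on the grid
    for u, ai, bi in zip(responses, a, b):
        p = p_correct(grid, ai, bi)
        posterior *= p if u == 1 else (1.0 - p)       # multiply in each item likelihood
    return float(np.sum(grid * posterior) / np.sum(posterior))

# Toy item bank: discrimination (a) and difficulty (b) parameters.
rng = np.random.default_rng(0)
bank_a = rng.uniform(0.8, 2.0, size=20)
bank_b = rng.uniform(-2.0, 2.0, size=20)

true_theta, theta_hat = 1.0, 0.0
administered, responses = [], []
for _ in range(8):                                    # fixed-length test of 8 items
    remaining = [i for i in range(len(bank_a)) if i not in administered]
    info = [item_information(theta_hat, bank_a[i], bank_b[i]) for i in remaining]
    nxt = remaining[int(np.argmax(info))]             # maximum-information selection
    administered.append(nxt)
    # Simulate the examinee's response under the same 2PL model.
    u = int(rng.random() < p_correct(true_theta, bank_a[nxt], bank_b[nxt]))
    responses.append(u)
    theta_hat = eap_estimate(responses, bank_a[administered], bank_b[administered])

print(administered, round(theta_hat, 2))
```

The large-scale concerns the article raises (item exposure control, bank maintenance, administration logistics) are precisely the parts this bare loop leaves out.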
Haladyna, Thomas A. – Applied Measurement in Education, 1992
Several multiple-choice item formats are examined in the current climate of test reform. The reform movement is discussed as it affects use of the following formats: (1) complex multiple-choice; (2) alternate choice; (3) true-false; (4) multiple true-false; and (5) the context dependent item set. (SLD)
Descriptors: Cognitive Psychology, Comparative Testing, Context Effect, Educational Change
