Publication Date
| Date Range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 5 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Test Format | 5 |
| Item Response Theory | 3 |
| Test Items | 2 |
| Academic Achievement | 1 |
| Accuracy | 1 |
| Adaptive Testing | 1 |
| Artificial Intelligence | 1 |
| Automation | 1 |
| Case Studies | 1 |
| Comparative Analysis | 1 |
| Computer Assisted Testing | 1 |
Source
| Source | Count |
| --- | --- |
| Applied Measurement in Education | 5 |
Author
| Author | Count |
| --- | --- |
| Ben Backes | 1 |
| Brian E. Clauser | 1 |
| Chunyan Liu | 1 |
| James Cowan | 1 |
| Janet Mee | 1 |
| Le An Ha | 1 |
| Lixin Yuan | 1 |
| Minqiang Zhang | 1 |
| Peter Baldwin | 1 |
| Raja Subhiyah | 1 |
| Richard A. Feinberg | 1 |
Publication Type
| Type | Count |
| --- | --- |
| Journal Articles | 5 |
| Reports - Research | 5 |
Education Level
| Level | Count |
| --- | --- |
| Elementary Education | 1 |
| Elementary Secondary Education | 1 |
| Grade 8 | 1 |
| Higher Education | 1 |
| Junior High Schools | 1 |
| Middle Schools | 1 |
| Postsecondary Education | 1 |
| Secondary Education | 1 |
Location
| Location | Count |
| --- | --- |
| Massachusetts | 1 |
Assessments and Surveys
| Assessment | Count |
| --- | --- |
| Massachusetts Comprehensive Assessment System | 1 |
Brian E. Clauser; Victoria Yaneva; Peter Baldwin; Le An Ha; Janet Mee – Applied Measurement in Education, 2024
Multiple-choice questions have become ubiquitous in educational measurement because the format allows for efficient and accurate scoring. Nonetheless, there is continued interest in constructed-response formats. This interest has driven efforts to develop computer-based scoring procedures that can accurately and efficiently score these items.…
Descriptors: Computer Uses in Education, Artificial Intelligence, Scoring, Responses
Ben Backes; James Cowan – Applied Measurement in Education, 2024
We investigate two research questions using a recent statewide transition from paper to computer-based testing: first, the extent to which test mode effects found in prior studies can be eliminated; and second, the degree to which online and paper assessments offer different information about underlying student ability. We first find very small…
Descriptors: Computer Assisted Testing, Test Format, Differences, Academic Achievement
Chunyan Liu; Raja Subhiyah; Richard A. Feinberg – Applied Measurement in Education, 2024
Mixed-format tests that include both multiple-choice (MC) and constructed-response (CR) items have become widely used in many large-scale assessments. When an item response theory (IRT) model is used to score a mixed-format test, the unidimensionality assumption may be violated if the CR items measure a different construct from that measured by MC…
Descriptors: Test Format, Response Style (Tests), Multiple Choice Tests, Item Response Theory
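
As background for the unidimensionality assumption this abstract raises, a minimal sketch of a typical unidimensional mixed-format setup: both item types are tied to the same single trait θ, e.g. a two-parameter logistic (2PL) model for the MC items and a generalized partial credit model (GPCM) for the CR items. These particular models are illustrative assumptions, not the specifications from Liu, Subhiyah, and Feinberg (2024).

```latex
% Illustrative unidimensional mixed-format setup (assumed models, not the article's).
% 2PL for a dichotomous MC item i:
P_i(\theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}}
% GPCM for a polytomous CR item j with score categories k = 0, \dots, m_j
% (the empty sum for k = 0 is taken to be 0):
P_{jk}(\theta) = \frac{\exp\left[ \sum_{v=1}^{k} a_j (\theta - b_{jv}) \right]}
                      {\sum_{c=0}^{m_j} \exp\left[ \sum_{v=1}^{c} a_j (\theta - b_{jv}) \right]}
```

The unidimensionality assumption is that the same θ drives both expressions; if the CR items in fact tap a second trait, calibrating all items on one θ is misspecified, which is the violation the abstract describes.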
Shaojie Wang; Won-Chan Lee; Minqiang Zhang; Lixin Yuan – Applied Measurement in Education, 2024
To reduce the impact of parameter estimation errors on IRT linking results, recent work introduced two information-weighted characteristic curve methods for dichotomous items. These two methods showed outstanding performance in both simulation and pseudo-form pseudo-group analysis. The current study expands upon the concept of information…
Descriptors: Item Response Theory, Test Format, Test Length, Error of Measurement
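
For orientation, here is the classic characteristic curve linking criterion in Stocking-Lord form; the information-weighted methods the abstract builds on modify the weighting using information about the parameter estimates, and the weights shown below are an assumption, not the authors' formula.

```latex
% Stocking-Lord characteristic curve linking: choose slope A and intercept B of the
% scale transformation \theta^* = A\theta + B to minimize the squared distance between
% test characteristic curves computed from the two calibrations:
F(A, B) = \sum_{q} w_q \left[ \sum_i T_i\big(\theta_q;\, a_i, b_i\big)
        - \sum_i T_i\big(\theta_q;\, \hat{a}_i / A,\; A \hat{b}_i + B\big) \right]^2
% w_q are fixed quadrature weights; the information-weighted variants replace or
% augment them with information-based weights (an assumption here).
```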
Yi-Hsuan Lee; Yue Jia – Applied Measurement in Education, 2024
Test-taking experience is a consequence of the interaction between students and assessment properties. We define a new notion, rapid-pacing behavior, to reflect two types of test-taking experience -- disengagement and speededness. To identify rapid-pacing behavior, we extend existing methods to develop response-time thresholds for individual items…
Descriptors: Adaptive Testing, Reaction Time, Item Response Theory, Test Format
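
The abstract extends existing response-time threshold methods; as a point of reference, here is a minimal sketch of one widely used baseline, a normative threshold that flags any response faster than a fixed fraction of the item's typical response time. The function name and the 10% fraction are illustrative assumptions, not details from Lee and Jia (2024).

```python
import numpy as np

def rapid_pacing_flags(rt: np.ndarray, fraction: float = 0.10) -> np.ndarray:
    """Flag rapid responses with a simple normative threshold.

    rt: (n_examinees, n_items) matrix of response times in seconds.
    fraction: threshold as a share of each item's median response time
              (10% is a common choice in the disengagement literature).
    Returns a boolean matrix; True marks a response faster than the
    item's threshold, i.e., candidate rapid-pacing behavior.
    """
    thresholds = fraction * np.nanmedian(rt, axis=0)  # one threshold per item
    return rt < thresholds  # broadcasts across examinees

# Example: 3 examinees x 2 items
rt = np.array([[1.2, 45.0],
               [30.5, 2.0],
               [28.0, 50.0]])
print(rapid_pacing_flags(rt))
```

Computing the threshold per item, rather than using one global cutoff, reflects that the plausible minimum time to read and answer varies with item length and difficulty.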

