Publication Date
| Date range | Count |
|---|---|
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 2 |
Descriptor
| Descriptor | Count |
|---|---|
| Medical Students | 2 |
| Test Format | 2 |
| Artificial Intelligence | 1 |
| Automation | 1 |
| Computer Uses in Education | 1 |
| High Stakes Tests | 1 |
| Multiple Choice Tests | 1 |
| Responses | 1 |
| Scoring | 1 |
| Test Items | 1 |
Author
| Author | Count |
|---|---|
| Brian E. Clauser | 2 |
| Janet Mee | 2 |
| Peter Baldwin | 2 |
| Amy Morales | 1 |
| Justin Wolczynski | 1 |
| Le An Ha | 1 |
| Miguel Paniagua | 1 |
| Polina Harik | 1 |
| Ravi Pandian | 1 |
| Victoria Yaneva | 1 |
Publication Type
| Publication Type | Count |
|---|---|
| Journal Articles | 2 |
| Reports - Research | 2 |
Education Level
| Education Level | Count |
|---|---|
| Higher Education | 2 |
| Postsecondary Education | 2 |
Janet Mee; Ravi Pandian; Justin Wolczynski; Amy Morales; Miguel Paniagua; Polina Harik; Peter Baldwin; Brian E. Clauser – Advances in Health Sciences Education, 2024
Recent advances in automated scoring technology have made it practical to replace multiple-choice questions (MCQs) with short-answer questions (SAQs) in large-scale, high-stakes assessments. However, most previous research comparing these formats has used small examinee samples testing under low-stakes conditions. Additionally, previous studies…
Descriptors: Multiple Choice Tests, High Stakes Tests, Test Format, Test Items
Brian E. Clauser; Victoria Yaneva; Peter Baldwin; Le An Ha; Janet Mee – Applied Measurement in Education, 2024
Multiple-choice questions have become ubiquitous in educational measurement because the format allows for efficient and accurate scoring. Nonetheless, there remains continued interest in constructed-response formats. This interest has driven efforts to develop computer-based scoring procedures that can accurately and efficiently score these items…
Descriptors: Computer Uses in Education, Artificial Intelligence, Scoring, Responses