Publication Date
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 6 |
Descriptor
| Computer Assisted Testing | 6 |
| Test Format | 4 |
| Scoring | 3 |
| Testing | 3 |
| Alternative Assessment | 2 |
| Artificial Intelligence | 2 |
| Automation | 2 |
| Academic Achievement | 1 |
| Achievement Gains | 1 |
| Adaptive Testing | 1 |
| Assessment Literacy | 1 |
Source
| ETS Research Report Series | 6 |
Author
| Belur, Vinetha | 1 |
| Casabianca, Jodi M. | 1 |
| Choi, Ikkyu | 1 |
| Eckerly, Carol | 1 |
| Fishtein, Daniel | 1 |
| Lehman, Blair | 1 |
| Miao, Jing | 1 |
| Sevak, Amit | 1 |
| Sparks, Jesse | 1 |
| Sparks, Jesse R. | 1 |
| Steinberg, Jonathan | 1 |
Publication Type
| Journal Articles | 6 |
| Reports - Research | 4 |
| Reports - Descriptive | 1 |
| Reports - Evaluative | 1 |
Education Level
| Secondary Education | 1 |
Assessments and Surveys
| National Assessment of… | 1 |
Miao, Jing; Cao, Yi; Walker, Michael E. – ETS Research Report Series, 2024
Studies of test score comparability have been conducted at different stages in the history of testing to ensure that test results carry the same meaning regardless of test conditions. The expansion of at-home testing via remote proctoring sparked another round of interest. This study uses data from three licensure tests to assess potential mode…
Descriptors: Testing, Test Format, Computer Assisted Testing, Home Study
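The full report is not reproduced here, but as a rough illustration of the kind of mode-comparability check such studies run, the sketch below computes a standardized mean difference between scores from two delivery modes. The data, function name, and choice of statistic are assumptions for illustration, not taken from the report.

```python
import statistics

def smd(center_scores, remote_scores):
    """Standardized mean difference between two score samples,
    using the pooled standard deviation."""
    m1, m2 = statistics.mean(center_scores), statistics.mean(remote_scores)
    v1, v2 = statistics.variance(center_scores), statistics.variance(remote_scores)
    n1, n2 = len(center_scores), len(remote_scores)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Hypothetical scaled scores for the two delivery modes.
test_center = [70, 72, 68, 75, 71, 69, 74]
at_home = [71, 73, 69, 74, 70, 72, 75]
print(f"SMD (center vs. at-home): {smd(test_center, at_home):.3f}")
```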
McCaffrey, Daniel F.; Casabianca, Jodi M.; Ricker-Pedley, Kathryn L.; Lawless, René R.; Wendler, Cathy – ETS Research Report Series, 2022
This document describes a set of best practices for developing, implementing, and maintaining the critical process of scoring constructed-response tasks. These practices address both the use of human raters and automated scoring systems as part of the scoring process and cover the scoring of written, spoken, performance, or multimodal responses.…
Descriptors: Best Practices, Scoring, Test Format, Computer Assisted Testing
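As a hedged illustration of one quality metric commonly used when human raters and automated scoring systems score the same constructed responses, the sketch below computes quadratically weighted kappa. The rubric range, data, and function name are hypothetical and not drawn from the report's recommended practices.

```python
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, n_categories):
    """Quadratically weighted kappa between two sets of integer scores
    (0 .. n_categories-1), a common human/machine agreement statistic."""
    observed = np.zeros((n_categories, n_categories))
    for a, b in zip(rater_a, rater_b):
        observed[a, b] += 1
    observed /= observed.sum()

    # Expected agreement under independence of the two raters.
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))

    # Quadratic disagreement weights: 0 on the diagonal, growing with distance.
    idx = np.arange(n_categories)
    weights = (idx[:, None] - idx[None, :]) ** 2 / (n_categories - 1) ** 2

    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

# Hypothetical human scores vs. automated-engine scores on a 0-4 rubric.
human = [3, 2, 4, 1, 3, 2, 0, 4, 3, 2]
machine = [3, 3, 4, 1, 2, 2, 1, 4, 3, 2]
print(f"QWK: {quadratic_weighted_kappa(human, machine, 5):.3f}")
```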
Lehman, Blair; Sparks, Jesse R.; Steinberg, Jonathan – ETS Research Report Series, 2024
Over the last 20 years, many methods have been proposed to use process data (e.g., response time) to detect changes in engagement during the test-taking process. However, many of these methods were developed and evaluated in highly similar testing contexts: 30 or more single-select multiple-choice items presented in a linear, fixed sequence in…
Descriptors: National Competency Tests, Secondary School Mathematics, Secondary School Students, Mathematics Tests
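One widely used response-time method of the kind this abstract alludes to is the normative threshold rule for flagging rapid guessing. The sketch below is a minimal illustration under assumed data; the threshold fraction, field names, and response log are hypothetical, and the report may evaluate different methods.

```python
from collections import defaultdict

def flag_rapid_guesses(responses, threshold_fraction=0.10):
    """Flag likely disengaged responses with the normative threshold
    method: a response faster than some fraction of the item's mean
    response time is treated as a rapid guess."""
    # Mean response time per item across all test takers.
    times = defaultdict(list)
    for r in responses:
        times[r["item"]].append(r["rt"])
    mean_rt = {item: sum(ts) / len(ts) for item, ts in times.items()}

    return [
        {**r, "rapid_guess": r["rt"] < threshold_fraction * mean_rt[r["item"]]}
        for r in responses
    ]

# Hypothetical response log: (person, item, response time in seconds).
log = [
    {"person": 1, "item": "A", "rt": 42.0},
    {"person": 2, "item": "A", "rt": 2.1},   # suspiciously fast
    {"person": 1, "item": "B", "rt": 55.0},
    {"person": 2, "item": "B", "rt": 61.0},
]
for r in flag_rapid_guesses(log):
    print(r["person"], r["item"], r["rapid_guess"])
```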
Eckerly, Carol; Jia, Yue; Jewsbury, Paul – ETS Research Report Series, 2022
Testing programs have explored the use of technology-enhanced items alongside traditional item types (e.g., multiple-choice and constructed-response items) as measurement evidence of latent constructs modeled with item response theory (IRT). In this report, we discuss considerations in applying IRT models to a particular type of adaptive testlet…
Descriptors: Computer Assisted Testing, Test Items, Item Response Theory, Scoring
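For readers unfamiliar with the IRT machinery the abstract references, the sketch below shows a two-parameter logistic (2PL) item response function and a crude grid-search ability estimate. It is a generic illustration, not the models or testlet structure discussed in the report; all item parameters and responses are invented.

```python
import math

def p_correct(theta, a, b):
    """2PL item response function: probability of a correct response
    for ability theta, discrimination a, difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def mle_theta(items, responses, grid=None):
    """Crude maximum-likelihood ability estimate by grid search over theta."""
    grid = grid or [g / 100 for g in range(-400, 401)]
    def log_lik(theta):
        return sum(
            math.log(p_correct(theta, a, b)) if u == 1
            else math.log(1.0 - p_correct(theta, a, b))
            for (a, b), u in zip(items, responses)
        )
    return max(grid, key=log_lik)

# Hypothetical (discrimination, difficulty) parameters and a response pattern.
items = [(1.2, -0.5), (0.8, 0.0), (1.5, 0.7), (1.0, 1.2)]
responses = [1, 1, 0, 0]
print(f"MLE of theta: {mle_theta(items, responses):.2f}")
```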
Olivera-Aguilar, Margarita; Lee, Hee-Sun; Pallant, Amy; Belur, Vinetha; Mulholland, Matthew; Liu, Ou Lydia – ETS Research Report Series, 2022
This study uses a computerized formative assessment system that provides automated scoring and feedback to help students write scientific arguments in a climate change curriculum. We compared the effect of contextualized versus generic automated feedback on students' explanations of scientific claims and attributions of uncertainty to those…
Descriptors: Computer Assisted Testing, Formative Evaluation, Automation, Scoring
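As a purely illustrative contrast between the two feedback conditions the abstract describes, the sketch below selects either a generic or a contextualized message for a scored argument. A real system like the one studied drives this with automated scoring models; the lookup tables, keys, and messages here are hypothetical.

```python
GENERIC = {
    0: "Revise your argument: state a claim and support it with evidence.",
    1: "Good start. Strengthen the link between your evidence and your claim.",
    2: "Strong argument. Consider discussing the certainty of your claim.",
}

CONTEXTUAL = {
    ("sea_level", 0): "Your claim about sea level needs data, such as the tide-gauge trend in the model.",
    ("sea_level", 1): "You cite sea-level data; now explain how it supports your claim about the rate of change.",
    ("sea_level", 2): "Well argued. How certain can you be, given variability in the sea-level record?",
}

def feedback(task_context, score, contextualized=True):
    """Return automated feedback for a scored scientific argument.
    Falls back to the generic message when no contextual one exists."""
    if contextualized and (task_context, score) in CONTEXTUAL:
        return CONTEXTUAL[(task_context, score)]
    return GENERIC[score]

print(feedback("sea_level", 1, contextualized=True))
print(feedback("sea_level", 1, contextualized=False))
```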
Kyllonen, Patrick; Sevak, Amit; Ober, Teresa; Choi, Ikkyu; Sparks, Jesse; Fishtein, Daniel – ETS Research Report Series, 2024
Assessment refers to a broad array of approaches for measuring or evaluating a person's (or group of persons') skills, behaviors, dispositions, or other attributes. Assessments range from standardized tests used in admissions, employee selection, licensure examinations, and domestic and international large-scale assessments of cognitive and…
Descriptors: Assessment Literacy, Testing, Test Bias, Test Construction