Publication Date
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 0 |
| Since 2007 (last 20 years) | 1 |
Descriptor
| Computer Assisted Testing | 2 |
| Computer Software | 2 |
| Interrater Reliability | 2 |
| Scoring | 2 |
| Writing Evaluation | 2 |
| Computer Software Evaluation | 1 |
| Essay Tests | 1 |
| Essays | 1 |
| Evaluation Methods | 1 |
| Grading | 1 |
| Judges | 1 |
Author
| McCurry, Doug | 1 |
| Page, Ellis Batten | 1 |
Publication Type
| Journal Articles | 2 |
| Reports - Evaluative | 2 |
Education Level
| Elementary Secondary Education | 1 |
Assessments and Surveys
| National Assessment of Educational Progress | 2 |
McCurry, Doug – Assessing Writing, 2010
This article considers the claim that machine scoring of writing test responses agrees with human readers as much as humans agree with other humans. These claims about the reliability of machine scoring of writing are usually based on specific and constrained writing tasks, and there is reason for asking whether machine scoring of writing requires…
Descriptors: Writing Tests, Scoring, Interrater Reliability, Computer Assisted Testing
Page, Ellis Batten – Journal of Experimental Education, 1994 (peer reviewed)
National Assessment of Educational Progress writing sample essays from 1988 and 1990 (495 and 599 essays) were subjected to computerized grading and human ratings. Cross-validation suggests that computer scoring is superior to a two-judge panel, a finding encouraging for large programs of essay evaluation. (SLD)
Descriptors: Computer Assisted Testing, Computer Software, Essays, Evaluation Methods

