Humphry, Stephen M.; McGrane, Joshua A. – Australian Educational Researcher, 2015
This paper presents a method for equating writing assessments using pairwise comparisons which does not depend upon conventional common-person or common-item equating designs. Pairwise comparisons have been successfully applied in the assessment of open-ended tasks in English and other areas such as visual art and philosophy. In this paper,…
Descriptors: Writing Evaluation, Evaluation Methods, Comparative Analysis, Writing Tests
Huot, Brian; O'Neill, Peggy; Moore, Cindy – College English, 2010
Writing program administrators and other composition specialists need to know the history of writing assessment in order to create a rich and responsible culture of it today. In its first fifty years, the field of writing assessment followed educational measurement in general by focusing on issues of reliability, whereas in its next fifty years,…
Descriptors: Writing (Composition), Writing Evaluation, Writing Tests, Validity
Wang, Tianyou; Kolen, Michael J.; Harris, Deborah J. – Journal of Educational Measurement, 2000
Describes procedures for calculating the conditional standard error of measurement (CSEM) and reliability of scale scores, and the classification consistency of performance levels. Applied these procedures to data from the American College Testing Program's Work Keys Writing Assessment with sample sizes of 7,097, 1,035, and 1,793. Results show that the…
Descriptors: Adults, Classification, Error of Measurement, Item Response Theory
Dikli, Semire – Journal of Technology, Learning, and Assessment, 2006
Automated Essay Scoring (AES) is defined as computer technology that evaluates and scores written prose (Shermis & Barrera, 2002; Shermis & Burstein, 2003; Shermis, Raymat, & Barrera, 2003). AES systems are mainly used to overcome time, cost, reliability, and generalizability issues in writing assessment (Bereiter, 2003; Burstein,…
Descriptors: Scoring, Writing Evaluation, Writing Tests, Standardized Tests
Attali, Yigal; Burstein, Jill – Journal of Technology, Learning, and Assessment, 2006
E-rater[R] has been used by the Educational Testing Service for automated essay scoring since 1999. This paper describes a new version of e-rater (V.2) that is different from other automated essay scoring systems in several important respects. The main innovations of e-rater V.2 are a small, intuitive, and meaningful set of features used for…
Descriptors: Educational Testing, Test Scoring Machines, Scoring, Writing Evaluation
Jones, Brett D.; Egley, Robert J. – ERS Spectrum, 2005
The purpose of this paper is to discuss Florida teachers' recommendations for improving the Florida Comprehensive Assessment Test (FCAT) and to compare their recommendations with those of Florida administrators. Although teachers' suggestions varied as to the types and extent of remedies needed to improve the FCAT, some common themes emerged. The…
Descriptors: Test Results, Core Curriculum, Student Evaluation, Accountability