Crossley, Scott A.; Kyle, Kristopher; McNamara, Danielle S. – Grantee Submission, 2015
This study investigates the relative efficacy of using linguistic micro-features, the aggregation of such features, and a combination of micro-features and aggregated features in developing automatic essay scoring (AES) models. Although the use of aggregated features is widespread in AES systems (e.g., e-rater; IntelliMetric), very little…
Descriptors: Essays, Scoring, Feedback (Response), Writing Evaluation

Peer reviewed
