| Publication Date | Results |
| In 2026 | 0 |
| Since 2025 | 1 |
| Since 2022 (last 5 years) | 5 |
| Since 2017 (last 10 years) | 7 |
| Since 2007 (last 20 years) | 7 |
| Descriptor | Results |
| Automation | 7 |
| Elementary School Students | 7 |
| Natural Language Processing | 7 |
| Artificial Intelligence | 6 |
| Scoring | 6 |
| Grade 4 | 4 |
| Essays | 3 |
| Writing Evaluation | 3 |
| Computer Assisted Testing | 2 |
| Evaluation Methods | 2 |
| Grade 2 | 2 |
| Source | Results |
| American Educational Research Journal | 1 |
| Annenberg Institute for School Reform at Brown University | 1 |
| ETS Research Report Series | 1 |
| Grantee Submission | 1 |
| Journal of Educational Computing Research | 1 |
| Journal of Learning Analytics | 1 |
| Language Assessment Quarterly | 1 |
| Author | Results |
| Araya, Roberto | 1 |
| Baker, Doris Luft | 1 |
| Chen Li | 1 |
| Chen, Dandan | 1 |
| Chenglu Li | 1 |
| Chunyi Ruan | 1 |
| Collazo, Marlen | 1 |
| Colleen Appel | 1 |
| Duanli Yan | 1 |
| E. E. Jang | 1 |
| Fan Zhang | 1 |
| Publication Type | Results |
| Reports - Research | 7 |
| Journal Articles | 6 |
| Education Level | Results |
| Elementary Education | 7 |
| Early Childhood Education | 4 |
| Grade 4 | 4 |
| Intermediate Grades | 4 |
| Primary Education | 4 |
| Grade 2 | 2 |
| Grade 3 | 2 |
| Grade 5 | 2 |
| Middle Schools | 2 |
| Grade 1 | 1 |
| Grade 6 | 1 |
| Location | Results |
| Canada | 1 |
Laws, Policies, & Programs
Assessments and Surveys
What Works Clearinghouse Rating
Zifeng Liu; Wanli Xing; Chenglu Li; Fan Zhang; Hai Li; Victor Minces – Journal of Learning Analytics, 2025
Creativity is a vital skill in science, technology, engineering, and mathematics (STEM)-related education, fostering innovation and problem-solving. Traditionally, creativity assessments relied on human evaluations, such as the consensual assessment technique (CAT), which are resource-intensive, time-consuming, and often subjective. Recent…
Descriptors: Creativity, Elementary School Students, Artificial Intelligence, Man Machine Systems
Paul Deane; Duanli Yan; Katherine Castellano; Yigal Attali; Michelle Lamar; Mo Zhang; Ian Blood; James V. Bruno; Chen Li; Wenju Cui; Chunyi Ruan; Colleen Appel; Kofi James; Rodolfo Long; Farah Qureshi – ETS Research Report Series, 2024
This paper presents a multidimensional model of variation in writing quality, register, and genre in student essays, trained and tested via confirmatory factor analysis of 1.37 million essay submissions to ETS' digital writing service, Criterion®. The model was also validated with several other corpora, which indicated that it provides a…
Descriptors: Writing (Composition), Essays, Models, Elementary School Students
Urrutia, Felipe; Araya, Roberto – Journal of Educational Computing Research, 2024
Written answers to open-ended questions can have a higher long-term effect on learning than multiple-choice questions. However, it is critical that teachers review the answers immediately and ask students to redo those that are incoherent. This can be a difficult and time-consuming task for teachers. A possible solution is to automate the detection…
Descriptors: Elementary School Students, Grade 4, Elementary School Mathematics, Mathematics Tests
Chen, Dandan; Hebert, Michael; Wilson, Joshua – American Educational Research Journal, 2022
We used multivariate generalizability theory to examine the reliability of hand-scoring and automated essay scoring (AES) and to identify how these scoring methods could be used in conjunction to optimize writing assessment. Students (n = 113) included subsamples of struggling writers and non-struggling writers in Grades 3-5 drawn from a larger…
Descriptors: Reliability, Scoring, Essays, Automation
Mozer, Reagan; Miratrix, Luke; Relyea, Jackie Eunjung; Kim, James S. – Annenberg Institute for School Reform at Brown University, 2021
In a randomized trial that collects text as an outcome, traditional approaches for assessing treatment impact require that each document first be manually coded for constructs of interest by human raters. An impact analysis can then be conducted to compare treatment and control groups, using the hand-coded scores as a measured outcome. This…
Descriptors: Scoring, Automation, Data Analysis, Natural Language Processing
L. Hannah; E. E. Jang; M. Shah; V. Gupta – Language Assessment Quarterly, 2023
Machines have a long-demonstrated ability to find statistical relationships between qualities of texts and surface-level linguistic indicators of writing. More recently, artificial intelligence has unlocked the potential of using machines to identify content-related writing trait criteria. This development is significant,…
Descriptors: Validity, Automation, Scoring, Writing Assignments
Sano, Makoto; Baker, Doris Luft; Collazo, Marlen; Le, Nancy; Kamata, Akihito – Grantee Submission, 2020
Purpose: Explore how reliably different automated scoring (AS) models score the expressive language and depth of vocabulary knowledge of young second-grade Latino English learners. Design/methodology/approach: Analyze a total of 13,471 English utterances from 217 Latino English learners with random forest, end-to-end memory networks, long…
Descriptors: English Language Learners, Hispanic American Students, Elementary School Students, Grade 2
