Shi Huawei; Vahid Aryadoust – Education and Information Technologies, 2023
Automated writing evaluation (AWE) systems are developed based on interdisciplinary research and technological advances such as natural language processing, computer sciences, and latent semantic analysis. Despite a steady increase in research publications in this area, the results of AWE investigations are often mixed, and their validity may be…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Automation
Jussi S. Jauhiainen; Agustín Garagorry Guerra – Innovations in Education and Teaching International, 2025
The study highlights ChatGPT-4's potential in educational settings for the evaluation of university students' open-ended written examination responses. ChatGPT-4 evaluated 54 written responses, ranging from 24 to 256 words in English. It assessed each response using five criteria and assigned a grade on a six-point scale from fail to excellent,…
Descriptors: Artificial Intelligence, Technology Uses in Education, Student Evaluation, Writing Evaluation
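The study above describes rating each response on five criteria and then assigning a single grade on a six-point scale from fail to excellent. A minimal sketch of how such per-criterion scores could be aggregated into one grade — the averaging rule and the score ranges are my illustrative assumptions, not the study's actual rubric:

```python
# Hypothetical sketch: collapsing five criterion scores (each 0-5) into a
# single grade on a six-point scale, 0 = fail .. 5 = excellent.
# The equal-weight averaging rule is an assumption for illustration.

FAIL, EXCELLENT = 0, 5

def aggregate_grade(criterion_scores):
    """Average exactly five per-criterion scores and round to the
    nearest point on the six-point scale."""
    if len(criterion_scores) != 5:
        raise ValueError("expected exactly five criterion scores")
    mean = sum(criterion_scores) / 5
    return max(FAIL, min(EXCELLENT, round(mean)))

print(aggregate_grade([4, 5, 4, 3, 4]))  # → 4
```

A real LLM-based rater might instead weight criteria unequally or let the model produce the holistic grade directly; this only shows the aggregation step.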
Andrew Runge; Sarah Goodwin; Yigal Attali; Mya Poe; Phoebe Mulcaire; Kai-Ling Lo; Geoffrey T. LaFlair – Language Testing, 2025
A longstanding criticism of traditional high-stakes writing assessments is their use of static prompts in which test takers compose a single text in response to a prompt. These static prompts do not allow measurement of the writing process. This paper describes the development and validation of an innovative interactive writing task. After the…
Descriptors: Material Development, Writing Evaluation, Writing Assignments, Writing Skills
Dan Song; Alexander F. Tang – Language Learning & Technology, 2025
While many studies have addressed the benefits of technology-assisted L2 writing, limited research has delved into how generative artificial intelligence (GAI) supports students in completing their writing tasks in Mandarin Chinese. In this study, 26 university-level Mandarin Chinese foreign language students completed two writing tasks on two…
Descriptors: Artificial Intelligence, Second Language Learning, Standardized Tests, Writing Tests
Osama Koraishi – Language Teaching Research Quarterly, 2024
This study conducts a comprehensive quantitative evaluation of OpenAI's language model, ChatGPT 4, for grading Task 2 writing of the IELTS exam. The objective is to assess the alignment between ChatGPT's grading and that of official human raters. The analysis encompassed a multifaceted approach, including a comparison of means and reliability…
Descriptors: Second Language Learning, English (Second Language), Language Tests, Artificial Intelligence
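The abstract above mentions comparing ChatGPT's IELTS Task 2 grades with official human ratings via a comparison of means and reliability analysis. A self-contained sketch of two such statistics, mean difference and Pearson correlation, on invented example scores (the data and the choice of statistics are my assumptions, not the study's actual analysis):

```python
# Illustrative sketch, not the study's code: agreement between
# model-assigned band scores and human ratings on made-up data.
import statistics

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

human = [6.0, 6.5, 7.0, 5.5, 8.0, 6.0]  # invented official ratings
model = [6.5, 6.5, 7.0, 6.0, 7.5, 6.5]  # invented model ratings

print(f"mean difference: {statistics.fmean(model) - statistics.fmean(human):+.2f}")
print(f"Pearson r: {pearson_r(human, model):.3f}")
```

Reliability studies of raters often also report quadratically weighted kappa, which penalizes large disagreements more than small ones; correlation alone can be high even when one rater is systematically more lenient.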
Alessandra Zappoli; Alessio Palmero Aprosio; Sara Tonelli – Written Communication, 2024
In this work, we explore the use of digital technologies and statistical analysis to monitor how Italian secondary school students' writing changes over time and how comparisons can be made across different high school types. We analyzed more than 2,000 exam essays written by Italian high school students over 13 years and in five different school…
Descriptors: Essays, Writing (Composition), Foreign Countries, High School Students
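The entry above describes monitoring how student writing changes across exam years via statistical analysis of a large essay corpus. A toy sketch of the general approach — grouping essays by year and tracking a simple lexical measure (type-token ratio) over time; the miniature corpus and the choice of measure are illustrative assumptions, not the study's method:

```python
# Illustrative sketch with invented data: per-year means of a simple
# lexical measure (type-token ratio) for a longitudinal essay corpus.
from collections import defaultdict

essays = [  # (exam year, essay text) — made-up miniature corpus
    (2010, "the school year was long and the exams were hard"),
    (2010, "students wrote essays about the books they read"),
    (2022, "contemporary curricula increasingly emphasize multimodal digital literacy"),
    (2022, "argumentative writing demands coherent evidence and nuanced reasoning"),
]

def type_token_ratio(text):
    """Unique words divided by total words (crude lexical diversity)."""
    words = text.lower().split()
    return len(set(words)) / len(words)

by_year = defaultdict(list)
for year, text in essays:
    by_year[year].append(type_token_ratio(text))

for year in sorted(by_year):
    print(year, round(sum(by_year[year]) / len(by_year[year]), 3))
```

On essays of realistic length, type-token ratio must be length-normalized before years can be compared fairly; corpus studies typically use measures designed for that.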
Yi Gui – ProQuest LLC, 2024
This study explores using transfer learning in machine learning for natural language processing (NLP) to create generic automated essay scoring (AES) models, providing instant online scoring for statewide writing assessments in K-12 education. The goal is to develop an instant online scorer that is generalizable to any prompt, addressing the…
Descriptors: Writing Tests, Natural Language Processing, Writing Evaluation, Scoring