Publication Date
| Period | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 2 |
| Since 2022 (last 5 years) | 5 |
| Since 2017 (last 10 years) | 12 |
| Since 2007 (last 20 years) | 17 |
Descriptor
| Descriptor | Results |
| --- | --- |
| Computer Software | 22 |
| Grading | 22 |
| Writing Evaluation | 22 |
| Essays | 11 |
| Feedback (Response) | 9 |
| Computer Assisted Testing | 8 |
| Writing Instruction | 8 |
| Evaluation Methods | 7 |
| Comparative Analysis | 6 |
| Writing Assignments | 6 |
| Artificial Intelligence | 5 |
Author
| Author | Results |
| --- | --- |
| Aggarwal, Varun | 1 |
| Aitken, Adam | 1 |
| Ali Momen | 1 |
| Aysegül Liman-Kaban | 1 |
| Barthel, Abigail L. | 1 |
| Borchardt, Donald A. | 1 |
| Carroll, Rebecca | 1 |
| Cengiz Zopluoglu | 1 |
| Chad C. Tossell | 1 |
| Chelsea M. Sims | 1 |
| Chelsea R. Frazier | 1 |
Education Level
| Education Level | Results |
| --- | --- |
| Higher Education | 8 |
| Postsecondary Education | 6 |
| Secondary Education | 4 |
| Elementary Education | 1 |
| Elementary Secondary Education | 1 |
| Grade 6 | 1 |
| High Schools | 1 |
| Intermediate Grades | 1 |
| Junior High Schools | 1 |
| Middle Schools | 1 |
Audience
| Audience | Results |
| --- | --- |
| Researchers | 1 |
Assessments and Surveys
| Assessment | Results |
| --- | --- |
| International English… | 1 |
| National Assessment of… | 1 |
| Program for International… | 1 |
Elizabeth L. Wetzler; Kenneth S. Cassidy; Margaret J. Jones; Chelsea R. Frazier; Nickalous A. Korbut; Chelsea M. Sims; Shari S. Bowen; Michael Wood – Teaching of Psychology, 2025
Background: Generative artificial intelligence (AI) represents a potentially powerful, time-saving tool for grading student essays. However, little is known about how AI-generated essay scores compare to human instructor scores. Objective: The purpose of this study was to compare the essay grading scores produced by AI with those of human…
Descriptors: Essays, Writing Evaluation, Scores, Evaluators
Zhang, Haoran; Litman, Diane – Grantee Submission, 2021
Human essay grading is a laborious task that can consume much time and effort. Automated Essay Scoring (AES) has thus been proposed as a fast and effective solution to the problem of grading student writing at scale. However, because AES typically uses supervised machine learning, a human-graded essay corpus is still required to train the AES…
Descriptors: Essays, Grading, Writing Evaluation, Computational Linguistics
Chad C. Tossell; Nathan L. Tenhundfeld; Ali Momen; Katrina Cooley; Ewart J. de Visser – IEEE Transactions on Learning Technologies, 2024
This article examined student experiences before and after an essay writing assignment that required the use of ChatGPT within an undergraduate engineering course. Utilizing a pre-post study design, we gathered data from 24 participants to evaluate ChatGPT's support for both completing and grading an essay assignment, exploring its educational…
Descriptors: Student Attitudes, Computer Software, Artificial Intelligence, Grading
Uto, Masaki; Okano, Masashi – IEEE Transactions on Learning Technologies, 2021
In automated essay scoring (AES), scores are automatically assigned to essays as an alternative to grading by humans. Traditional AES typically relies on handcrafted features, whereas recent studies have proposed AES models based on deep neural networks to obviate the need for feature engineering. Those AES models generally require training on a…
Descriptors: Essays, Scoring, Writing Evaluation, Item Response Theory
Cengiz Zopluoglu; Gerald Tindal – Behavioral Research and Teaching, 2023
WriteRightNow (https://writerightnow.com) is an innovative digital platform meticulously crafted to enhance writing instruction across various curricula. Central to its design is the customization of instructional content, allowing for a multi-faceted approach that caters to diverse student needs, including those with special educational…
Descriptors: Automation, Grading, Educational Technology, Technology Uses in Education
Osama Koraishi – Language Teaching Research Quarterly, 2024
This study conducts a comprehensive quantitative evaluation of OpenAI's language model, ChatGPT 4, for grading Task 2 writing of the IELTS exam. The objective is to assess the alignment between ChatGPT's grading and that of official human raters. The analysis encompassed a multifaceted approach, including a comparison of means and reliability…
Descriptors: Second Language Learning, English (Second Language), Language Tests, Artificial Intelligence
Seval Kemal; Aysegül Liman-Kaban – Asian Journal of Distance Education, 2025
This study conducts a comprehensive analysis of the assessment of journal writing in English as a Foreign Language (EFL) at the secondary school level, comparing the performance of a Generative Artificial Intelligence (GenAI) platform with two human graders. Employing a convergent parallel mixed methods design, quantitative data were collected…
Descriptors: Artificial Intelligence, Secondary School Students, Feedback (Response), Writing Assignments
Hattie, John; Crivelli, Jill; Van Gompel, Kristin; West-Smith, Patricia; Wike, Kathryn – Online Submission, 2021
Feedback is powerful but variable. This study investigates which forms of feedback are more predictive of improvement to students' essays, using "Turnitin Feedback Studio"--a computer augmented system to capture teacher and computer-generated feedback comments. The study used a sample of 3,204 high school and university students who…
Descriptors: Feedback (Response), Writing Evaluation, High School Students, Undergraduate Students
Johnson, William F.; Stellmack, Mark A.; Barthel, Abigail L. – Teaching of Psychology, 2019
Electronic feedback given via word-processing software (e.g., track changes in Microsoft Word) allows for a simple way to provide feedback to students during the drafting process. Research has mostly focused on student attitudes toward electronic feedback, with little investigation of how feedback format might affect the quality of instructor…
Descriptors: Feedback (Response), Writing Evaluation, Writing Assignments, Educational Technology
Aitken, Adam; Thompson, Darrall G. – International Journal of Technology and Design Education, 2018
First year undergraduate design students have found difficulties in realising the standards expected for academic writing at university level. An assessment initiative was used to engage students with criteria and standards for a core interdisciplinary design subject notable for its demanding assessment of academic writing. The same graduate…
Descriptors: Undergraduate Students, Design, Assignments, Computer Software
Unnam, Abhishek; Takhar, Rohit; Aggarwal, Varun – International Educational Data Mining Society, 2019
Email has become the most preferred form of business communication. Writing "good" email has become an essential skill required in the industry. "Good" email writing not only facilitates clear communication, but also makes a positive impression on the recipient, whether it be one's colleague or a customer. The aim of this paper…
Descriptors: Grading, Electronic Mail, Feedback (Response), Written Language
Seifried, Eva; Lenhard, Wolfgang; Spinath, Birgit – Journal of Educational Computing Research, 2017
Writing essays and receiving feedback can be useful for fostering students' learning and motivation. When faced with large class sizes, it is desirable to identify students who might particularly benefit from feedback. In this article, we tested the potential of Latent Semantic Analysis (LSA) for identifying poor essays. A total of 14 teaching…
Descriptors: Computer Assisted Testing, Computer Software, Essays, Writing Evaluation
Krishnan, Rathi – NADE Digest, 2016
This paper is based on a presentation made at NADE 2016, in Anaheim, California, entitled "Turnitin--An Extraordinary Teaching and Feedback Tool in the Writing Classroom" which discussed the value and benefits of using Turnitin (TII), a subscription-based software/website available to universities that serves as an audio-visual feedback…
Descriptors: Plagiarism, Writing Evaluation, Feedback (Response), Computer Software
Eckhouse, Barry; Carroll, Rebecca – Business Communication Quarterly, 2013
Although relatively little attention has been given to the voice assessment of student work, at least when compared with more traditional forms of text-based review, the attention it has received strongly points to a promising form of review that has been hampered by the limits of an emerging technology. A fresh review of voice assessment in light…
Descriptors: Undergraduate Students, Graduate Students, Business Administration Education, Student Surveys
Lai, Yi-hsiu – British Journal of Educational Technology, 2010
The purpose of this study was to investigate problems and potentials of new technologies in English writing education. The effectiveness of automated writing evaluation (AWE) ("MY Access") and of peer evaluation (PE) was compared. Twenty-two English as a foreign language (EFL) learners in Taiwan participated in this study. They submitted…
Descriptors: Feedback (Response), Writing Evaluation, Peer Evaluation, Grading