Showing 1 to 15 of 28 results
Peer reviewed
Direct link
Hosnia M. M. Ahmed; Shaymaa E. Sorour – Education and Information Technologies, 2024
Evaluating the quality of university exam papers is crucial for universities seeking institutional and program accreditation. Currently, exam papers are assessed manually, a process that can be tedious, lengthy, and, in some cases, inconsistent. This is often because assessment focuses only on the formal specifications of exam papers. This study…
Descriptors: Higher Education, Artificial Intelligence, Writing Evaluation, Natural Language Processing
Peer reviewed
PDF on ERIC (full text available)
Victor-Alexandru Padurean; Tung Phung; Nachiket Kotalwar; Michael Liut; Juho Leinonen; Paul Denny; Adish Singla – International Educational Data Mining Society, 2025
The growing need for automated and personalized feedback in programming education has led to recent interest in leveraging generative AI for feedback generation. However, current approaches tend to rely on prompt engineering techniques in which predefined prompts guide the AI to generate feedback. This can result in rigid and constrained responses…
Descriptors: Automation, Student Writing Models, Feedback (Response), Programming
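As a rough illustration of the predefined-prompt approach this abstract describes, here is a minimal Python sketch; the template wording and the send_to_llm callable are assumptions for illustration, not details from the paper.

# Sketch of prompt-engineered feedback generation for programming exercises.
# send_to_llm() is a placeholder for an actual model API call (assumption);
# the prompt template below is illustrative, not the one used in the paper.

FEEDBACK_PROMPT = """You are a programming tutor. A student submitted the
following solution to this exercise.

Exercise: {task}

Student code:
{code}

Give one short hint that points at the main bug without revealing the fix."""

def build_feedback_prompt(task: str, code: str) -> str:
    """Fill the fixed template with the exercise and submission."""
    return FEEDBACK_PROMPT.format(task=task, code=code)

def generate_feedback(task: str, code: str, send_to_llm) -> str:
    """send_to_llm: callable taking a prompt string, returning model text."""
    return send_to_llm(build_feedback_prompt(task, code))

The fixed template is precisely the rigidity the abstract points to: every submission receives feedback shaped by the same predefined prompt.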
Peer reviewed
Direct link
Mickie De Wet; Margarita Oja Da Silva; René Bohnsack – Innovations in Education and Teaching International, 2025
This study explores the use of large language models (LLMs) to generate feedback on essay-type assignments in Higher Education. Drawing on a seminal feedback framework, it examines the pedagogical and psychological effectiveness of LLM-generated feedback across three cohorts of MBA, MSc, and undergraduate students. Methods included linguistic…
Descriptors: Higher Education, College Students, Artificial Intelligence, Writing Evaluation
Peer reviewed
Direct link
Huawei, Shi; Aryadoust, Vahid – Education and Information Technologies, 2023
Automated writing evaluation (AWE) systems are developed based on interdisciplinary research and technological advances such as natural language processing, computer science, and latent semantic analysis. Despite a steady increase in research publications in this area, the results of AWE investigations are often mixed, and their validity may be…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Automation
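Latent semantic analysis, one of the techniques named above, can be sketched in a few lines of scikit-learn: TF-IDF features reduced by truncated SVD, then compared by cosine similarity. The toy corpus is an assumption for illustration; production AWE systems combine such semantic measures with many other features.

# Minimal latent semantic analysis sketch: project texts into a low-rank
# semantic space and compare them by cosine similarity (toy corpus).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

essays = [
    "The experiment shows that plants grow faster with more light.",
    "Plants exposed to extra light grew more quickly in the experiment.",
    "The stock market fell sharply after the announcement.",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(essays)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)
print(cosine_similarity(lsa))  # the first two essays land close together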
Peer reviewed
PDF on ERIC (full text available)
Anson, Chris M. – Composition Studies, 2022
Student plagiarism has challenged educators for decades, with heightened paranoia following the advent of the Internet in the 1980s and ready access to easily copied text. But plagiarism will look like child's play next to new developments in AI-based natural-language processing (NLP) systems that increasingly appear to "write" as…
Descriptors: Plagiarism, Artificial Intelligence, Natural Language Processing, Writing Assignments
Peer reviewed
Direct link
Wesley Morris; Scott Crossley; Langdon Holmes; Chaohua Ou; Mihai Dascalu; Danielle McNamara – International Journal of Artificial Intelligence in Education, 2025
As intelligent textbooks become more ubiquitous in classrooms and educational settings, the need arises to make them more interactive. One approach is to ask students to generate knowledge in response to textbook content and to provide feedback on that generated knowledge. This study develops Natural Language Processing models to automatically…
Descriptors: Formative Evaluation, Feedback (Response), Textbooks, Artificial Intelligence
Peer reviewed
PDF on ERIC (full text available)
Paul Deane; Duanli Yan; Katherine Castellano; Yigal Attali; Michelle Lamar; Mo Zhang; Ian Blood; James V. Bruno; Chen Li; Wenju Cui; Chunyi Ruan; Colleen Appel; Kofi James; Rodolfo Long; Farah Qureshi – ETS Research Report Series, 2024
This paper presents a multidimensional model of variation in writing quality, register, and genre in student essays, trained and tested via confirmatory factor analysis of 1.37 million essay submissions to ETS' digital writing service, Criterion®. The model was also validated with several other corpora, which indicated that it provides a…
Descriptors: Writing (Composition), Essays, Models, Elementary School Students
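The paper fits a confirmatory factor model, which is typically estimated in dedicated SEM software; as a loose stand-in, the following sketch uses exploratory factor analysis on synthetic essay features to show how correlated surface features can be traced back to a small number of latent writing dimensions. All variable names, loadings, and data are invented for illustration.

# Illustration of recovering latent writing dimensions from essay features.
# The paper fits a confirmatory factor model; this sketch substitutes
# exploratory factor analysis on synthetic data just to show the idea.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 1000
quality = rng.normal(size=n)    # latent "writing quality"
register = rng.normal(size=n)   # latent "register/formality"

features = np.column_stack([
    quality + 0.3 * rng.normal(size=n),   # e.g., grammar score
    quality + 0.3 * rng.normal(size=n),   # e.g., organization score
    register + 0.3 * rng.normal(size=n),  # e.g., formality index
    register + 0.3 * rng.normal(size=n),  # e.g., academic vocabulary rate
])

fa = FactorAnalysis(n_components=2, random_state=0).fit(features)
print(np.round(fa.components_, 2))  # loadings separate the two dimensions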
Peer reviewed
PDF on ERIC (full text available)
David W. Brown; Dean Jensen – International Society for Technology, Education, and Science, 2023
The growth of Artificial Intelligence (AI) chatbots has created a great deal of discussion in the education community. While many have gravitated towards the ability of these bots to make learning more interactive, others have grave concerns that student-created essays, long used as a means of assessing the subject comprehension of students, may…
Descriptors: Artificial Intelligence, Natural Language Processing, Computer Software, Writing (Composition)
McCaffrey, Daniel F.; Zhang, Mo; Burstein, Jill – Grantee Submission, 2022
Background: This exploratory writing analytics study uses argumentative writing samples from two performance contexts--standardized writing assessments and university English course writing assignments--to compare: (1) linguistic features in argumentative writing; and (2) relationships between linguistic characteristics and academic performance…
Descriptors: Persuasive Discourse, Academic Language, Writing (Composition), Academic Achievement
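A minimal sketch of the kind of feature-to-performance analysis described above, assuming generic linguistic features (length, mean sentence length, type-token ratio) and hypothetical grades; the study's actual feature set and corpora are far richer.

# Sketch of relating simple linguistic features to performance scores.
# Features and grades below are invented stand-ins for illustration.
from scipy.stats import pearsonr

def linguistic_features(text: str) -> dict:
    words = text.lower().split()
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    return {
        "word_count": len(words),
        "mean_sentence_length": len(words) / max(len(sentences), 1),
        "type_token_ratio": len(set(words)) / max(len(words), 1),
    }

essays = [
    "Uniforms are good. They are good.",
    "Uniforms reduce visible income differences among students.",
    "Requiring uniforms reduces visible income differences and may ease peer pressure.",
    "A uniform policy, by standardizing dress, reduces visible markers of income and can lessen peer pressure.",
]
grades = [62, 74, 83, 91]  # hypothetical course grades

ttr = [linguistic_features(e)["type_token_ratio"] for e in essays]
r, p = pearsonr(ttr, grades)
print(f"type-token ratio vs grade: r={r:.2f} (p={p:.2f})")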
Peer reviewed
PDF on ERIC (full text available)
Wan, Qian; Crossley, Scott; Allen, Laura; McNamara, Danielle – Grantee Submission, 2020
In this paper, we extracted content-based and structure-based features of text to predict human annotations for claims and nonclaims in argumentative essays. We compared Logistic Regression, Bernoulli Naive Bayes, Gaussian Naive Bayes, Linear Support Vector Classification, Random Forest, and Neural Networks to train classification models. Random…
Descriptors: Persuasive Discourse, Essays, Writing Evaluation, Natural Language Processing
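The classifier comparison is straightforward to reproduce in scikit-learn. This sketch uses toy claim/non-claim sentences with binary bag-of-words features as a stand-in for the content- and structure-based features in the study.

# Sketch of the classifier comparison described above, on toy claim /
# non-claim sentences (the study used richer feature sets and real essays).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import BernoulliNB, GaussianNB
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

sentences = [
    "School uniforms should be mandatory.",
    "Homework improves learning outcomes.",
    "The survey had two hundred respondents.",
    "Many schools adopted uniforms in 2005.",
    "Standardized tests should be abolished.",
    "The essay was written in class.",
]
is_claim = [1, 1, 0, 0, 1, 0]

X = CountVectorizer(binary=True).fit_transform(sentences).toarray()

models = {
    "logreg": LogisticRegression(max_iter=1000),
    "bernoulli_nb": BernoulliNB(),
    "gaussian_nb": GaussianNB(),
    "linear_svc": LinearSVC(),
    "random_forest": RandomForestClassifier(random_state=0),
    "mlp": MLPClassifier(max_iter=2000, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, is_claim, cv=2)
    print(f"{name}: mean accuracy {scores.mean():.2f}")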
Crossley, Scott A.; Kim, Minkyung; Allen, Laura K.; McNamara, Danielle S. – Grantee Submission, 2019
Summarization is an effective strategy to promote and enhance learning and deep comprehension of texts. However, summarization is seldom implemented by teachers in classrooms because the manual evaluation of students' summaries requires time and effort. This problem has led to the development of automated models of summarization quality. However,…
Descriptors: Automation, Writing Evaluation, Natural Language Processing, Artificial Intelligence
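One crude proxy for automated summary scoring, sketched below under the assumption that content overlap with the source text tracks summary quality: TF-IDF cosine similarity between summary and source. Published models of summarization quality combine many more linguistic and semantic indices.

# Crude sketch of automated summary scoring: cosine similarity between a
# student summary and the source text (a single proxy, toy example).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

source = ("Photosynthesis converts light energy into chemical energy. "
          "Plants use carbon dioxide and water to produce glucose and oxygen.")
summary = "Plants turn light, water, and carbon dioxide into glucose and oxygen."

tfidf = TfidfVectorizer().fit([source, summary])
vecs = tfidf.transform([source, summary])
score = cosine_similarity(vecs[0], vecs[1])[0, 0]
print(f"content-overlap score: {score:.2f}")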
Hong Jiao, Editor; Robert W. Lissitz, Editor – IAP - Information Age Publishing, Inc., 2024
With the exponential increase in digital assessment, different types of data in addition to item responses become available in the measurement process. One of the salient features of digital assessment is that process data can be easily collected. This non-conventional structured or unstructured data source may bring new perspectives to better…
Descriptors: Artificial Intelligence, Natural Language Processing, Psychometrics, Computer Assisted Testing
Peer reviewed
Direct link
Chen, Dandan; Hebert, Michael; Wilson, Joshua – American Educational Research Journal, 2022
We used multivariate generalizability theory to examine the reliability of hand-scoring and automated essay scoring (AES) and to identify how these scoring methods could be used in conjunction to optimize writing assessment. Students (n = 113) included subsamples of struggling writers and non-struggling writers in Grades 3-5 drawn from a larger…
Descriptors: Reliability, Scoring, Essays, Automation
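A one-facet person x rater G-study, the simplest version of the generalizability analysis used here, can be computed directly from mean squares; the score matrix below is invented, and the study's own design is multivariate rather than this single-facet sketch.

# One-facet (person x rater) G-study: estimate variance components from
# mean squares and form generalizability coefficients. Toy score matrix.
import numpy as np

scores = np.array([  # rows = students, columns = raters (or scoring engines)
    [4.0, 5.0, 4.0],
    [2.0, 2.0, 3.0],
    [5.0, 5.0, 5.0],
    [3.0, 4.0, 3.0],
])
n_p, n_r = scores.shape
grand = scores.mean()

ms_p = n_r * np.sum((scores.mean(axis=1) - grand) ** 2) / (n_p - 1)
ms_r = n_p * np.sum((scores.mean(axis=0) - grand) ** 2) / (n_r - 1)
resid = (scores - scores.mean(axis=1, keepdims=True)
         - scores.mean(axis=0, keepdims=True) + grand)
ms_pr = np.sum(resid ** 2) / ((n_p - 1) * (n_r - 1))

var_pr = ms_pr                          # interaction + error
var_p = max((ms_p - ms_pr) / n_r, 0.0)  # true writer variance
var_r = max((ms_r - ms_pr) / n_p, 0.0)  # rater severity variance

g_rel = var_p / (var_p + var_pr / n_r)            # relative decisions
g_abs = var_p / (var_p + (var_r + var_pr) / n_r)  # absolute decisions
print(f"relative G = {g_rel:.2f}, dependability (phi) = {g_abs:.2f}")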
Peer reviewed
PDF on ERIC (full text available)
Lynette Hazelton; Jessica Nastal; Norbert Elliot; Jill Burstein; Daniel F. McCaffrey – Journal of Response to Writing, 2021
In writing studies research, automated writing evaluation technology is typically examined for a specific, often narrow purpose: to evaluate a particular writing improvement measure, to mine data for changes in writing performance, or to demonstrate the effectiveness of a single technology and accompanying validity arguments. This article adopts a…
Descriptors: Formative Evaluation, Writing Evaluation, Automation, Natural Language Processing
Mozer, Reagan; Miratrix, Luke; Relyea, Jackie Eunjung; Kim, James S. – Annenberg Institute for School Reform at Brown University, 2021
In a randomized trial that collects text as an outcome, traditional approaches for assessing treatment impact require that each document first be manually coded for constructs of interest by human raters. An impact analysis can then be conducted to compare treatment and control groups, using the hand-coded scores as a measured outcome. This…
Descriptors: Scoring, Automation, Data Analysis, Natural Language Processing
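The traditional pipeline the abstract describes can be sketched in a few lines: score every document, then compare group means. The scorer below is a trivial stand-in for human coding, and the documents and effect are invented.

# Sketch of the traditional text-outcome impact analysis: score each
# document, then compare treatment and control groups on the scores.
from scipy.stats import ttest_ind

def score_document(text: str) -> float:
    """Stand-in for human coding: score by unique-word count."""
    return float(len(set(text.lower().split())))

treatment_docs = [
    "The character changed because she finally understood the cost of pride.",
    "Evidence from the passage supports a different, more nuanced conclusion.",
]
control_docs = ["The story was good.", "I liked the book a lot."]

treat = [score_document(d) for d in treatment_docs]
ctrl = [score_document(d) for d in control_docs]
t, p = ttest_ind(treat, ctrl)
impact = sum(treat) / len(treat) - sum(ctrl) / len(ctrl)
print(f"estimated impact: {impact:.1f} (t={t:.2f}, p={p:.2f})")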