Ngoc My Bui; Jessie S. Barrot – Education and Information Technologies, 2025
With generative artificial intelligence (AI) tools' remarkable capabilities in understanding and generating meaningful content, intriguing questions have been raised about their potential as automated essay scoring (AES) systems. One such tool is ChatGPT, which is capable of scoring any written work against predefined criteria. However,…
Descriptors: Artificial Intelligence, Natural Language Processing, Technology Uses in Education, Automation
Huiying Cai; Xun Yan – Language Testing, 2024
Rater comments tend to be qualitatively analyzed to indicate raters' application of rating scales. This study applied natural language processing (NLP) techniques to quantify meaningful, behavioral information from a corpus of rater comments and triangulated that information with a many-facet Rasch measurement (MFRM) analysis of rater scores. The…
Descriptors: Natural Language Processing, Item Response Theory, Rating Scales, Writing Evaluation
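One simple way to quantify behavioral information from rater comments, in the spirit of the NLP analysis above, is counting mentions of rubric categories with keyword lexicons. The categories and terms below are hypothetical illustrations, not the study's actual coding scheme:

```python
from collections import Counter

# Hypothetical keyword lexicons for rubric categories a rater might mention.
# These categories and terms are illustrative assumptions only.
LEXICONS = {
    "organization": {"structure", "organization", "paragraph", "flow"},
    "grammar": {"grammar", "tense", "agreement", "article"},
    "vocabulary": {"vocabulary", "word", "lexical", "phrasing"},
}

def profile_comment(comment: str) -> Counter:
    """Count how often each rubric category is referenced in a rater comment."""
    tokens = [t.strip(".,;:") for t in comment.lower().split()]
    counts = Counter()
    for category, terms in LEXICONS.items():
        counts[category] = sum(1 for t in tokens if t in terms)
    return counts

comments = [
    "Good structure but weak grammar and tense control.",
    "Vocabulary is varied; paragraph flow could improve.",
]
for c in comments:
    print(profile_comment(c))
```

Profiles like these, aggregated per rater, are the kind of quantitative signal that could then be triangulated against MFRM severity estimates.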
Arzu Atasoy; Saieed Moslemi Nezhad Arani – Education and Information Technologies, 2025
There is growing interest in the potential of Artificial Intelligence (AI) to assist in various educational tasks, including writing assessment. However, the comparative efficacy of human and AI-powered systems in this domain remains a subject of ongoing exploration. This study aimed to compare the accuracy of human raters (teachers and…
Descriptors: Writing (Composition), Writing Evaluation, Student Evaluation, Artificial Intelligence
Andrew Potter; Mitchell Shortt; Maria Goldshtein; Rod D. Roscoe – Grantee Submission, 2025
Broadly defined, academic language (AL) is a set of lexical-grammatical norms and registers commonly used in educational and academic discourse. Mastery of academic language in writing is an important aspect of writing instruction and assessment. The purpose of this study was to use Natural Language Processing (NLP) tools to examine the extent to…
Descriptors: Academic Language, Natural Language Processing, Grammar, Vocabulary Skills
Yishen Song; Qianta Zhu; Huaibo Wang; Qinhua Zheng – IEEE Transactions on Learning Technologies, 2024
Manually scoring and revising student essays has long been a time-consuming task for educators. With the rise of natural language processing techniques, automated essay scoring (AES) and automated essay revising (AER) have emerged to alleviate this burden. However, current AES and AER models require large amounts of training data and lack…
Descriptors: Scoring, Essays, Writing Evaluation, Computer Software
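A minimal feature-based AES baseline illustrates why such systems need training data: a model is fit to human-scored essays before it can score new ones. This is a toy sketch, not any published system; the essays, scores, and features (length and type-token ratio) are fabricated for illustration:

```python
import numpy as np

def essay_features(text: str) -> list[float]:
    """Surface features often used as AES baselines: length and lexical diversity."""
    tokens = text.lower().split()
    n = len(tokens)
    ttr = len(set(tokens)) / n if n else 0.0  # type-token ratio
    return [float(n), ttr]

# Tiny fabricated training set of (essay, human score) pairs. Real AES
# systems train on thousands of scored essays with far richer features.
train = [
    ("short essay with few words", 1.0),
    ("a somewhat longer essay that develops an idea with more varied words", 3.0),
    ("an extended well developed essay that elaborates several distinct ideas "
     "using precise and varied vocabulary throughout the argument", 5.0),
]
X = np.array([essay_features(t) + [1.0] for t, _ in train])  # bias column
y = np.array([s for _, s in train])
w, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares fit

def predict(text: str) -> float:
    """Score a new essay with the fitted linear weights."""
    return float(np.array(essay_features(text) + [1.0]) @ w)
```

With only three training points the weights are meaningless outside this toy corpus, which is exactly the data-hunger problem the entry above describes.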
Sanosi, Abdulaziz; Abdalla, Mohamed – Australian Journal of Applied Linguistics, 2021
This study aimed to examine the potential of the NLP approach for detecting discourse markers (DMs), namely okay, in transcribed spoken data. One hundred thirty-eight concordance lines were presented to human referees, who judged whether okay functioned as a DM or non-DM in each. After that, the researchers used a Python script written according to the…
Descriptors: Natural Language Processing, Computational Linguistics, Programming Languages, Accuracy
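The study's actual Python script is not reproduced here, but a rule-based DM/non-DM classifier for okay might be sketched as follows. The rules are assumptions for illustration (utterance-initial okay read as a DM; okay after a copula read as the adjective, non-DM use), not the authors' criteria:

```python
import re

def okay_is_dm(line: str) -> bool:
    """Heuristic sketch: classify 'okay' in a concordance line as a discourse
    marker (DM) or not. Assumes the line contains 'okay'."""
    tokens = re.findall(r"[a-z']+", line.lower())
    i = tokens.index("okay")
    if i == 0:
        # Utterance-initial okay is treated as a DM.
        return True
    if tokens[i - 1] in {"is", "was", "are", "were", "that's", "it's", "i'm"}:
        # A preceding copula suggests the adjective (non-DM) reading.
        return False
    return True

print(okay_is_dm("Okay, let's move to the next exercise."))    # → True (DM)
print(okay_is_dm("I think the answer is okay but incomplete."))  # → False (non-DM)
```

Agreement between such a script and human referees is what a study like this would then measure.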
Katherine Drinkwater Gregg; Olivia Ryan; Andrew Katz; Mark Huerta; Susan Sajadi – Journal of Engineering Education, 2025
Background: Courses in engineering often use peer evaluation to monitor teamwork behaviors and team dynamics. The qualitative peer comments written for peer evaluations hold potential as a valuable source of formative feedback for students, yet little is known about their content and quality. Purpose: This study uses a large language model (LLM)…
Descriptors: Artificial Intelligence, Technology Uses in Education, Engineering Education, Student Evaluation
Garman, Andrew N.; Erwin, Taylor S.; Garman, Tyler R.; Kim, Dae Hyun – Journal of Competency-Based Education, 2021
Background: Competency models provide useful frameworks for organizing learning and assessment programs, but their construction is both time intensive and subject to perceptual biases. Some aspects of model development may be particularly well-suited to automation, specifically natural language processing (NLP), which could also help make them…
Descriptors: Natural Language Processing, Automation, Guidelines, Leadership Effectiveness
Crossley, Scott; Wan, Qian; Allen, Laura; McNamara, Danielle – Reading and Writing: An Interdisciplinary Journal, 2023
Synthesis writing is widely taught across domains and serves as an important means of assessing writing ability, text comprehension, and content learning. Synthesis writing differs from other types of writing in terms of both cognitive and task demands because it requires writers to integrate information across source materials. However, little is…
Descriptors: Writing Skills, Cognitive Processes, Essays, Cues
Osama Koraishi – Language Teaching Research Quarterly, 2024
This study conducts a comprehensive quantitative evaluation of OpenAI's language model, ChatGPT 4, for grading Task 2 writing of the IELTS exam. The objective is to assess the alignment between ChatGPT's grading and that of official human raters. The analysis encompassed a multifaceted approach, including a comparison of means and reliability…
Descriptors: Second Language Learning, English (Second Language), Language Tests, Artificial Intelligence
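Comparing model and human ratings by mean difference and correlation, as the study above does at much larger scale, can be sketched in a few lines of standard-library Python. The band scores below are fabricated for illustration only (IELTS Task 2 uses 0–9 bands):

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Fabricated band scores for six essays: human raters vs. a model.
human = [6.0, 6.5, 7.0, 5.5, 8.0, 6.5]
model = [6.5, 6.5, 7.5, 5.0, 8.0, 7.0]

diff = mean(model) - mean(human)  # positive = model scores more leniently
r = pearson_r(human, model)
print(f"mean difference: {diff:+.2f} bands, correlation: r = {r:.2f}")
```

A fuller reliability analysis would add agreement indices (e.g., quadratic-weighted kappa) rather than correlation alone.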
Xiaoling Bai; Nur Rasyidah Mohd Nordin – Eurasian Journal of Applied Linguistics, 2025
Proficient writing is deemed instrumental to achieving competence in EFL, yet it remains one of the most challenging learning domains. This study investigates the impact of human-AI collaborative feedback on the writing proficiency of EFL students. It examines key teaching domains, including the teaching environment, teacher…
Descriptors: Artificial Intelligence, Feedback (Response), Evaluators, Writing Skills
Moussalli, Souheila; Cardoso, Walcir – Computer Assisted Language Learning, 2020
Second/foreign language (L2) classrooms do not always provide opportunities for input and output practice [Lightbown, P. M. (2000). Classroom SLA research and second language teaching. Applied Linguistics, 21(4), 431-462]. The use of smart speakers such as Amazon Echo and its associated voice-controlled intelligent personal assistant (IPA) Alexa…
Descriptors: Artificial Intelligence, Pronunciation, Native Language, Listening Comprehension
Jorge-Botana, Guillermo; Luzón, José M.; Gómez-Veiga, Isabel; Martín-Cordero, Jesús I. – Journal of Educational Computing Research, 2015
A latent semantic analysis (LSA)-based automated summary assessment is described; this automated system is applied to a real learning-from-text task in a distance education context. We comment on the use of automated content, plagiarism, and text-coherence measures, as well as average word weights, and their impact on predicting human judges' summary scores. A…
Descriptors: Foreign Countries, Distance Education, Regression (Statistics), Plagiarism
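The core of an LSA-style content measure, of the kind such summary-assessment systems build on, is a term-document matrix projected through a truncated SVD, with summaries scored by cosine similarity to a reference text in the latent space. The mini-corpus below is fabricated, and real systems train the semantic space on a large background corpus rather than only the documents being compared:

```python
import numpy as np

def term_doc_matrix(docs):
    """Raw term-frequency matrix: one row per word, one column per document."""
    vocab = sorted({w for d in docs for w in d.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    M = np.zeros((len(vocab), len(docs)))
    for j, d in enumerate(docs):
        for w in d.lower().split():
            M[index[w], j] += 1
    return M

def lsa_similarity(docs, k=2):
    """Project documents into a k-dimensional latent space via truncated SVD
    and return each document's cosine similarity to the first (reference) doc."""
    M = term_doc_matrix(docs)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    D = (np.diag(s[:k]) @ Vt[:k]).T  # one row of latent coordinates per document
    ref = D[0]
    return D @ ref / (np.linalg.norm(D, axis=1) * np.linalg.norm(ref) + 1e-12)

# Fabricated mini-corpus: a source text, a close summary, an off-topic one.
docs = [
    "plants use sunlight water and carbon dioxide to make food",
    "plants make food from sunlight water and carbon dioxide",
    "the stock market fell sharply after the announcement",
]
sims = lsa_similarity(docs)
```

The on-topic summary scores near the reference while the off-topic text scores near zero, which is the behavior a summary-scoring regression would exploit.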
Forbes-Riley, Kate; Litman, Diane – International Journal of Artificial Intelligence in Education, 2013
In this paper we investigate how student disengagement relates to two performance metrics in a spoken dialog computer tutoring corpus, both when disengagement is measured through manual annotation by a trained human judge, and also when disengagement is measured through automatic annotation by the system based on a machine learning model. First,…
Descriptors: Correlation, Learner Engagement, Oral Language, Computer Assisted Instruction
Blanchard, Daniel; Tetreault, Joel; Higgins, Derrick; Cahill, Aoife; Chodorow, Martin – ETS Research Report Series, 2013
This report presents work on the development of a new corpus of non-native English writing. It will be useful for the task of native language identification, as well as grammatical error detection and correction, and automatic essay scoring. In this report, the corpus is described in detail.
Descriptors: Language Tests, Second Language Learning, English (Second Language), Writing Tests
