Showing 1 to 15 of 18 results
Peer reviewed
PDF on ERIC
Doewes, Afrizal; Kurdhi, Nughthoh Arfawi; Saxena, Akrati – International Educational Data Mining Society, 2023
Automated Essay Scoring (AES) tools aim to improve the efficiency and consistency of essay scoring by using machine learning algorithms. In the existing research work on this topic, most researchers agree that human-automated score agreement remains the benchmark for assessing the accuracy of machine-generated scores. To measure the performance of…
Descriptors: Essays, Writing Evaluation, Evaluators, Accuracy
Peer reviewed
PDF on ERIC
Tahereh Firoozi; Okan Bulut; Mark J. Gierl – International Journal of Assessment Tools in Education, 2023
The proliferation of large language models represents a paradigm shift in the landscape of automated essay scoring (AES) systems, fundamentally elevating their accuracy and efficacy. This study presents an extensive examination of large language models, with a particular emphasis on the transformative influence of transformer-based models, such as…
Descriptors: Turkish, Writing Evaluation, Essays, Accuracy
UK Department for Education, 2024
This report sets out the findings of the technical development work completed as part of the Use Cases for Generative AI in Education project, commissioned by the Department for Education (DfE) in September 2023. It has been published alongside the User Research Report, which sets out the findings from the ongoing user engagement activity…
Descriptors: Artificial Intelligence, Technology Uses in Education, Computer Software, Computational Linguistics
Peer reviewed
Direct link
Nehm, Ross H.; Haertig, Hendrik – Journal of Science Education and Technology, 2012
Our study examines the efficacy of Computer Assisted Scoring (CAS) of open-response text relative to expert human scoring within the complex domain of evolutionary biology. Specifically, we explored whether CAS can diagnose the explanatory elements (or Key Concepts) that comprise undergraduate students' explanatory models of natural selection with…
Descriptors: Evolution, Undergraduate Students, Interrater Reliability, Computers
Peer reviewed
PDF on ERIC
Yagi, Sane M.; Al-Salman, Saleh – Studies in Second Language Learning and Teaching, 2011
Writing is a complex skill that is hard to teach. Although the written product is what is often evaluated in the context of language teaching, the process of giving thought to linguistic form is fascinating. For almost forty years, language teachers have found it more effective to help learners in the writing process than in the written product;…
Descriptors: Writing Instruction, Teaching Methods, Computer Software, Educational Technology
Peer reviewed
PDF on ERIC
Sabapathy, Elangkeeran A/L; Rahim, Rozlan Abd; Jusoff, Kamaruzaman – English Language Teaching, 2009
The purpose of this article is to examine the extent to which "plagiarismdetect.com," an internet tool for detecting plagiarism, helps academics tackle the ever-growing problem of plagiarism. Concerned with term papers, essays and, most of the time, full-blown research reports, a tool like "plagiarismdetect.com" may…
Descriptors: Plagiarism, Computer Software, Essays, Research Papers (Students)
Peer reviewed
Direct link
McNamara, Danielle S.; Crossley, Scott A.; McCarthy, Philip M. – Written Communication, 2010
In this study, a corpus of expert-graded essays, based on a standardized scoring rubric, is computationally evaluated so as to distinguish the differences between those essays that were rated as high and those rated as low. The automated tool, Coh-Metrix, is used to examine the degree to which high- and low-proficiency essays can be predicted by…
Descriptors: Essays, Undergraduate Students, Educational Quality, Computational Linguistics
Peer reviewed
Direct link
Lee, Yong-Won; Gentile, Claudia; Kantor, Robert – Applied Linguistics, 2010
The main purpose of the study was to investigate the distinctness and reliability of analytic (or multi-trait) rating dimensions and their relationships to holistic scores and "e-rater"[R] essay feature variables in the context of the TOEFL[R] computer-based test (TOEFL CBT) writing assessment. Data analyzed in the study were holistic…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays
Preston, Michael D. – Educational Technology, 2010
An inquiry-based approach to watching videos of children engaged in learning, supported by tools that allow for frequent and close viewing, provides an opportunity for prospective teachers to develop their skills of observation and interpretation before entering the classroom. The in-depth study of videos creates a context in which teachers can…
Descriptors: Preservice Teacher Education, Inquiry, Learning Strategies, Essays
Peer reviewed
Direct link
Chodorow, Martin; Gamon, Michael; Tetreault, Joel – Language Testing, 2010
In this paper, we describe and evaluate two state-of-the-art systems for identifying and correcting writing errors involving English articles and prepositions. Criterion[superscript SM], developed by Educational Testing Service, and "ESL Assistant", developed by Microsoft Research, both use machine learning techniques to build models of article…
Descriptors: Grammar, Feedback (Response), Form Classes (Languages), Second Language Learning
Peer reviewed
Direct link
Burrows, Steven; Shortis, Mark – Australasian Journal of Educational Technology, 2011
Online marking and feedback systems are critical for providing timely and accurate feedback to students and for maintaining the integrity of results in large-class teaching. Previous investigations have involved much in-house development, and more consideration is needed for deploying or customising off-the-shelf solutions. Furthermore, keeping up to…
Descriptors: Foreign Countries, Integrated Learning Systems, Feedback (Response), Evaluation Criteria
Peer reviewed
PDF on ERIC
Grimes, Douglas; Warschauer, Mark – Journal of Technology, Learning, and Assessment, 2010
Automated writing evaluation (AWE) software uses artificial intelligence (AI) to score student essays and support revision. We studied how an AWE program called MY Access![R] was used in eight middle schools in Southern California over a three-year period. Although many teachers and students considered automated scoring unreliable, and teachers'…
Descriptors: Automation, Writing Evaluation, Essays, Artificial Intelligence
Peer reviewed
Direct link
Clariana, Roy B.; Wallace, Patricia E.; Godshalk, Veronica M. – Educational Technology Research and Development, 2009
Essays are an important measure of complex learning, but pronouns can confound an author's intended meaning for both readers and text analysis software. This descriptive investigation considers the effect of pronouns on a computer-based text analysis approach, "ALA-Reader," which uses students' essays as the data source for deriving individual and…
Descriptors: Sentences, Cognitive Structures, Essays, Content Analysis
Peer reviewed
PDF on ERIC
Kim, Seong-in; Hameed, Ibrahim A. – Art Therapy: Journal of the American Art Therapy Association, 2009
For mental health professionals, art assessment is a useful tool for patient evaluation and diagnosis. Consideration of various color-related elements is important in art assessment. This correlational study introduces the concept of variety of color as a new color-related element of an artwork. This term represents a comprehensive use of color,…
Descriptors: Mental Health Workers, Essays, Scoring, Visual Stimuli
Peer reviewed
Direct link
Lai, Yi-hsiu – British Journal of Educational Technology, 2010
The purpose of this study was to investigate problems and potentials of new technologies in English writing education. The effectiveness of automated writing evaluation (AWE) ("MY Access") and of peer evaluation (PE) was compared. Twenty-two English as a foreign language (EFL) learners in Taiwan participated in this study. They submitted…
Descriptors: Feedback (Response), Writing Evaluation, Peer Evaluation, Grading