Publication Date
| Date range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 4 |
| Since 2022 (last 5 years) | 32 |
| Since 2017 (last 10 years) | 45 |
| Since 2007 (last 20 years) | 46 |
Descriptor
| Descriptor | Records |
| --- | --- |
| Artificial Intelligence | 46 |
| Computer Assisted Testing | 46 |
| Scoring | 46 |
| Automation | 19 |
| Computer Software | 14 |
| Natural Language Processing | 14 |
| Accuracy | 12 |
| Essays | 12 |
| Feedback (Response) | 10 |
| Foreign Countries | 9 |
| Computational Linguistics | 7 |
Author
| Author | Records |
| --- | --- |
| Danielle S. McNamara | 2 |
| Evanini, Keelan | 2 |
| Ionut Paraschiv | 2 |
| Mihai Dascalu | 2 |
| Shi, Lehong | 2 |
| Stefan Ruseti | 2 |
| Zhai, Xiaoming | 2 |
| Alex J. Mechaber | 1 |
| Allen, Laura K. | 1 |
| Amanda Huee-Ping Wong | 1 |
| Andrew B. Wolf | 1 |
Location
| Location | Records |
| --- | --- |
| China | 2 |
| Canada | 1 |
| Europe | 1 |
| Florida | 1 |
| Indonesia | 1 |
| Singapore | 1 |
| Turkey | 1 |
| United Kingdom | 1 |
| United Kingdom (England) | 1 |
Assessments and Surveys
| Assessment | Records |
| --- | --- |
| Test of English as a Foreign… | 2 |
| National Assessment of… | 1 |
| Torrance Tests of Creative… | 1 |
Peter Baldwin; Victoria Yaneva; Kai North; Le An Ha; Yiyun Zhou; Alex J. Mechaber; Brian E. Clauser – Journal of Educational Measurement, 2025
Recent developments in the use of large language models have led to substantial improvements in the accuracy of content-based automated scoring of free-text responses. The reported accuracy levels suggest that automated systems could have widespread applicability in assessment. However, before they are used in operational testing, other aspects of…
Descriptors: Artificial Intelligence, Scoring, Computational Linguistics, Accuracy
Wesley Morris; Langdon Holmes; Joon Suh Choi; Scott Crossley – International Journal of Artificial Intelligence in Education, 2025
Recent developments in the field of artificial intelligence allow for improved performance in the automated assessment of extended response items in mathematics, potentially enabling these items to be scored cheaply and at scale. This study details the grand prize-winning approach to developing large language models (LLMs) to automatically…
Descriptors: Automation, Computer Assisted Testing, Mathematics Tests, Scoring
Jonas Flodén – British Educational Research Journal, 2025
This study compares how the generative AI (GenAI) large language model (LLM) ChatGPT grades university exams relative to human teachers. Aspects investigated include consistency, large discrepancies, and length of answer. Implications for higher education, including the role of teachers and ethics, are also discussed. Three…
Descriptors: College Faculty, Artificial Intelligence, Comparative Testing, Scoring
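A comparison of this kind ultimately reduces to agreement statistics between AI-assigned and human-assigned grades. The sketch below illustrates that step only; the grade arrays are hypothetical, and the choice of quadratic weighted kappa, Pearson correlation, and a two-point discrepancy cutoff are illustrative assumptions, not the study's actual analysis.

```python
# Minimal sketch: comparing hypothetical AI-assigned grades with human grades.
# Data and metric choices are illustrative assumptions, not the study's method.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

human_grades = np.array([3, 4, 2, 5, 4, 1, 3, 4])  # hypothetical teacher grades
ai_grades = np.array([3, 4, 3, 5, 3, 1, 3, 5])     # hypothetical LLM grades

# Quadratic weighted kappa penalises large discrepancies more than small ones.
qwk = cohen_kappa_score(human_grades, ai_grades, weights="quadratic")
r, _ = pearsonr(human_grades, ai_grades)
large_gaps = int(np.sum(np.abs(human_grades - ai_grades) >= 2))

print(f"QWK: {qwk:.2f}, Pearson r: {r:.2f}, large discrepancies: {large_gaps}")
```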
William Orwig; Emma R. Edenbaum; Joshua D. Greene; Daniel L. Schacter – Journal of Creative Behavior, 2024
Recent developments in computerized scoring via semantic distance have provided automated assessments of verbal creativity. Here, we extend past work, applying computational linguistic approaches to characterize salient features of creative text. We hypothesize that, in addition to semantic diversity, the degree to which a story includes…
Descriptors: Computer Assisted Testing, Scoring, Creativity, Computational Linguistics
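As a rough illustration of the semantic-diversity idea, the sketch below computes the mean pairwise cosine distance between sentence embeddings of a short story. The embedding model, the example sentences, and the pairwise-distance formulation are assumptions for illustration, not the authors' pipeline.

```python
# Rough sketch of a semantic-diversity score: mean pairwise cosine distance
# between sentence embeddings. Model choice and formulation are illustrative
# assumptions, not the authors' actual measure.
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed available

story = [
    "The lighthouse keeper collected broken clocks.",
    "Each night the sea whispered a different year.",
    "By morning the gulls had rearranged the stars.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
emb = model.encode(story)
emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)  # unit-normalise rows

# Cosine distance = 1 - cosine similarity, averaged over all sentence pairs.
sims = emb @ emb.T
n = len(story)
pair_dists = [1 - sims[i, j] for i in range(n) for j in range(i + 1, n)]
print(f"Semantic diversity (mean pairwise distance): {np.mean(pair_dists):.3f}")
```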
Stefan Ruseti; Ionut Paraschiv; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Automated Essay Scoring (AES) is a well-studied problem in Natural Language Processing applied in education. Solutions vary from handcrafted linguistic features to large Transformer-based models, implying a significant effort in feature extraction and model implementation. We introduce a novel Automated Machine Learning (AutoML) pipeline…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essays
Stefan Ruseti; Ionut Paraschiv; Mihai Dascalu; Danielle S. McNamara – International Journal of Artificial Intelligence in Education, 2024
Automated Essay Scoring (AES) is a well-studied problem in Natural Language Processing applied in education. Solutions vary from handcrafted linguistic features to large Transformer-based models, implying a significant effort in feature extraction and model implementation. We introduce a novel Automated Machine Learning (AutoML) pipeline…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essays
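The abstract does not detail the pipeline's search space, but the general idea of automating feature and model selection for AES can be sketched with an off-the-shelf grid search over a simple TF-IDF plus ridge regression scorer. Everything below (essays, scores, features, model, metric) is an illustrative stand-in, not the authors' AutoML pipeline.

```python
# Illustrative stand-in for automated model selection in AES: a grid search
# over a TF-IDF + ridge regression pipeline. This is NOT the authors' AutoML
# pipeline; essays and holistic scores are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

essays = ["The experiment shows the plant grew faster in light.",
          "In my opinion school should start later in the day.",
          "The author argues that honesty matters more than success.",
          "Plants need sunlight water and soil to grow well.",
          "The data suggest that exercise improves concentration.",
          "History teaches us to question simple explanations."]
scores = [3.0, 2.0, 4.0, 3.0, 4.0, 2.0]  # hypothetical holistic scores

pipe = Pipeline([("tfidf", TfidfVectorizer()), ("reg", Ridge())])
grid = {
    "tfidf__ngram_range": [(1, 1), (1, 2)],
    "reg__alpha": [0.1, 1.0, 10.0],
}
search = GridSearchCV(pipe, grid, cv=3, scoring="neg_mean_squared_error")
search.fit(essays, scores)
print(search.best_params_, search.best_score_)
```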
Buczak, Philip; Huang, He; Forthmann, Boris; Doebler, Philipp – Journal of Creative Behavior, 2023
Traditionally, researchers employ human raters to score responses to creative thinking tasks. Apart from the associated costs, this approach entails two potential risks. First, human raters can be subjective in their scoring behavior (inter-rater variance). Second, individual raters are prone to inconsistent scoring patterns…
Descriptors: Computer Assisted Testing, Scoring, Automation, Creative Thinking
Shin, Jinnie; Gierl, Mark J. – Journal of Applied Testing Technology, 2022
Automated Essay Scoring (AES) technologies provide innovative solutions for scoring written essays in a much shorter time span and at a fraction of the current cost. Traditionally, AES has emphasized the importance of capturing the "coherence" of writing because abundant evidence indicated the connection between coherence and the overall…
Descriptors: Computer Assisted Testing, Scoring, Essays, Automation
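One common way to operationalise coherence, sketched below purely for illustration, is the average similarity between adjacent sentences. The embedding model, the example essay, and the adjacent-pair formulation are assumptions, not the measure used in the article.

```python
# Illustrative coherence proxy: mean cosine similarity between adjacent
# sentence embeddings. Model and formulation are assumptions, not the
# article's actual coherence measure.
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed available

essay_sentences = [
    "Recycling reduces the amount of waste sent to landfills.",
    "It also conserves natural resources such as timber and water.",
    "For these reasons, cities should expand curbside recycling programs.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
emb = model.encode(essay_sentences)
emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)  # unit-normalise rows

adjacent_sims = [float(emb[i] @ emb[i + 1]) for i in range(len(emb) - 1)]
print(f"Coherence proxy (mean adjacent similarity): {np.mean(adjacent_sims):.3f}")
```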
Rebecka Weegar; Peter Idestam-Almquist – International Journal of Artificial Intelligence in Education, 2024
Machine learning methods can be used to reduce the manual workload in exam grading, making it possible for teachers to spend more time on other tasks. However, when it comes to grading exams, fully eliminating manual work is not yet possible even with very accurate automated grading, as any grading mistakes could have significant consequences for…
Descriptors: Grading, Computer Assisted Testing, Introductory Courses, Computer Science Education
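The workload reduction alluded to here typically comes from routing only low-confidence predictions to a human grader. The thresholding sketch below is a generic illustration of that idea, with a hypothetical classifier, data, and operating threshold; it is not the authors' grading system.

```python
# Generic sketch of confidence-based routing: auto-accept confident predictions,
# send uncertain ones to a human grader. Classifier, data, and threshold are
# hypothetical; this is not the authors' system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

answers = ["a stack is LIFO", "a queue is FIFO", "a stack is FIFO",
           "lists are immutable", "a stack is last in first out", "no idea"]
labels = [1, 1, 0, 0, 1, 0]  # hypothetical correct/incorrect labels

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(answers, labels)

new_answers = ["a queue is first in first out", "stacks and queues are the same"]
proba = clf.predict_proba(new_answers)
confidence = proba.max(axis=1)
predictions = proba.argmax(axis=1)

THRESHOLD = 0.8  # hypothetical operating point
for ans, conf, pred in zip(new_answers, confidence, predictions):
    route = "auto-grade" if conf >= THRESHOLD else "send to human grader"
    print(f"{route:>22}: pred={pred}, confidence={conf:.2f} :: {ans}")
```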
McCaffrey, Daniel F.; Casabianca, Jodi M.; Ricker-Pedley, Kathryn L.; Lawless, René R.; Wendler, Cathy – ETS Research Report Series, 2022
This document describes a set of best practices for developing, implementing, and maintaining the critical process of scoring constructed-response tasks. These practices address both the use of human raters and automated scoring systems as part of the scoring process and cover the scoring of written, spoken, performance, or multimodal responses.…
Descriptors: Best Practices, Scoring, Test Format, Computer Assisted Testing
Firoozi, Tahereh; Bulut, Okan; Epp, Carrie Demmans; Naeimabadi, Ali; Barbosa, Denilson – Journal of Applied Testing Technology, 2022
Automated Essay Scoring (AES) using neural networks has helped increase the accuracy and efficiency of scoring students' written tasks. Generally, the improved accuracy of neural network approaches has been attributed to the use of modern word embedding techniques. However, which word embedding techniques produce higher accuracy in AES systems…
Descriptors: Computer Assisted Testing, Scoring, Essays, Artificial Intelligence
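A bare-bones way to compare representation choices is to cross-validate the same regressor over different featurisations, as sketched below. The bag-of-words versus TF-IDF contrast here is only a stand-in for the word-embedding comparison in the article, and the essays and scores are hypothetical.

```python
# Bare-bones comparison of two featurisations for AES, standing in for the
# word-embedding comparison in the article. Data, features, and model are
# hypothetical illustrations only.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

essays = ["The essay argues that libraries remain essential.",
          "Water cycles through evaporation and condensation.",
          "My summer was spent reading at the lake.",
          "The character changes after losing the race.",
          "Energy cannot be created or destroyed.",
          "The poem uses repetition to build tension."]
scores = [4.0, 3.0, 2.0, 4.0, 3.0, 5.0]  # hypothetical holistic scores

candidates = {
    "bag-of-words": make_pipeline(CountVectorizer(), Ridge()),
    "tf-idf": make_pipeline(TfidfVectorizer(), Ridge()),
}
for name, pipe in candidates.items():
    mse = -cross_val_score(pipe, essays, scores, cv=3,
                           scoring="neg_mean_squared_error").mean()
    print(f"{name}: mean CV MSE = {mse:.3f}")
```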
Ormerod, Christopher; Lottridge, Susan; Harris, Amy E.; Patel, Milan; van Wamelen, Paul; Kodeswaran, Balaji; Woolf, Sharon; Young, Mackenzie – International Journal of Artificial Intelligence in Education, 2023
We introduce a short answer scoring engine made up of an ensemble of deep neural networks and a Latent Semantic Analysis-based model to score short constructed responses for a large suite of questions from a national assessment program. We evaluate the performance of the engine and show that the engine achieves above-human-level performance on a…
Descriptors: Computer Assisted Testing, Scoring, Artificial Intelligence, Semantics
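The Latent Semantic Analysis component of such an engine can be sketched, under illustrative assumptions, as truncated SVD over TF-IDF vectors followed by cosine similarity to reference answers. The reference answers, dimensionality, and score threshold below are hypothetical, and the paper's deep-network ensemble is omitted entirely.

```python
# Illustrative LSA-style scorer for short answers: TF-IDF -> truncated SVD ->
# cosine similarity to reference answers. Reference answers, dimensionality,
# and threshold are hypothetical; the paper's deep-network ensemble is omitted.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.pipeline import make_pipeline

reference_answers = [
    "Photosynthesis converts sunlight, water, and carbon dioxide into glucose and oxygen.",
    "Plants use light energy to turn carbon dioxide and water into sugar, releasing oxygen.",
    "In photosynthesis, light energy drives the production of glucose from CO2 and water.",
]
student_response = ["Plants make sugar from sunlight and carbon dioxide and give off oxygen."]

lsa = make_pipeline(TfidfVectorizer(), TruncatedSVD(n_components=2))
ref_vecs = lsa.fit_transform(reference_answers)
stu_vec = lsa.transform(student_response)

similarity = cosine_similarity(stu_vec, ref_vecs).max()
print(f"Best similarity to a reference answer: {similarity:.2f}")
print("score: 1 (correct)" if similarity > 0.5 else "score: 0 (incorrect)")
```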
Tri Sedya Febrianti; Siti Fatimah; Yuni Fitriyah; Hanifah Nurhayati – International Journal of Education in Mathematics, Science and Technology, 2024
Assessing students' understanding of circle-related material through subjective tests is effective, though grading these tests can be challenging and often requires technological support. ChatGPT has shown promise in providing reliable and objective evaluations. Many teachers in Indonesia, however, continue to face difficulties integrating…
Descriptors: Artificial Intelligence, Computer Assisted Testing, Scoring, Tests
Cathy Cavanaugh; Bryn Humphrey; Paige Pullen – International Journal on E-Learning, 2024
To address one US state's need to provide a professional development micro-credential for tens of thousands of educators, we automated an assignment scoring workflow in an online course by developing and refining an AI model to scan submitted assignments and score them against a rubric. This article outlines the AI model development process and…
Descriptors: Artificial Intelligence, Automation, Scoring, Microcredentials
Eran Hadas; Arnon Hershkovitz – Journal of Learning Analytics, 2025
Creativity is an imperative skill for today's learners, one that has important contributions to issues of inclusion and equity in education. Therefore, assessing creativity is of major importance in educational contexts. However, scoring creativity based on traditional tools suffers from subjectivity and is heavily time- and labour-consuming. This…
Descriptors: Creativity, Evaluation Methods, Computer Assisted Testing, Artificial Intelligence
