Publication Date
| Period | Count |
|---|---|
| In 2026 | 0 |
| Since 2025 | 8 |
| Since 2022 (last 5 years) | 34 |
| Since 2017 (last 10 years) | 42 |
| Since 2007 (last 20 years) | 46 |
Descriptor
| Descriptor | Count |
|---|---|
| Artificial Intelligence | 46 |
| Automation | 46 |
| Computer Assisted Testing | 46 |
| Scoring | 20 |
| Feedback (Response) | 17 |
| Natural Language Processing | 16 |
| Foreign Countries | 11 |
| Essays | 9 |
| Grading | 9 |
| Prediction | 9 |
| Technology Uses in Education | 8 |
Audience
| Audience | Count |
|---|---|
| Policymakers | 1 |
Location
| Location | Count |
|---|---|
| Australia | 2 |
| China | 2 |
| Florida | 2 |
| Indonesia | 2 |
| Israel | 2 |
| Netherlands | 2 |
| Pennsylvania | 2 |
| Spain | 2 |
| Turkey | 2 |
| United Kingdom (England) | 2 |
| Asia | 1 |
Assessments and Surveys
| Assessment | Count |
|---|---|
| Massachusetts Comprehensive… | 1 |
| National Assessment of… | 1 |
| Test of English as a Foreign… | 1 |
Ishaya Gambo; Faith-Jane Abegunde; Omobola Gambo; Roseline Oluwaseun Ogundokun; Akinbowale Natheniel Babatunde; Cheng-Chi Lee – Education and Information Technologies, 2025
The current educational system relies heavily on manual grading, posing challenges such as delayed feedback and grading inaccuracies. Automated grading tools (AGTs) offer solutions but come with limitations. To address this, "GRAD-AI" is introduced, an advanced AGT that combines automation with teacher involvement for precise grading,…
Descriptors: Automation, Grading, Artificial Intelligence, Computer Assisted Testing
Luyang Fang; Gyeonggeon Lee; Xiaoming Zhai – Journal of Educational Measurement, 2025
Machine learning-based automatic scoring faces challenges with imbalanced student responses across scoring categories. To address this, we introduce a novel text data augmentation framework that leverages GPT-4, a generative large language model, specifically tailored for imbalanced datasets in automatic scoring. Our experimental dataset consisted…
Descriptors: Computer Assisted Testing, Artificial Intelligence, Automation, Scoring
Andreea Dutulescu; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024 (peer reviewed)
Assessing the difficulty of reading comprehension questions is crucial to educational methodologies and language understanding technologies. Traditional methods of assessing question difficulty rely frequently on human judgments or shallow metrics, often failing to accurately capture the intricate cognitive demands of answering a question. This…
Descriptors: Difficulty Level, Reading Tests, Test Items, Reading Comprehension
Wesley Morris; Langdon Holmes; Joon Suh Choi; Scott Crossley – International Journal of Artificial Intelligence in Education, 2025
Recent developments in the field of artificial intelligence allow for improved performance in the automated assessment of extended response items in mathematics, potentially allowing for the scoring of these items cheaply and at scale. This study details the grand prize-winning approach to developing large language models (LLMs) to automatically…
Descriptors: Automation, Computer Assisted Testing, Mathematics Tests, Scoring
Putnikovic, Marko; Jovanovic, Jelena – IEEE Transactions on Learning Technologies, 2023
Automatic grading of short answers is an important task in computer-assisted assessment (CAA). Recently, embeddings, as semantic-rich textual representations, have been increasingly used to represent short answers and predict the grade. Despite the recent trend of applying embeddings in automatic short answer grading (ASAG), there are no…
Descriptors: Automation, Computer Assisted Testing, Grading, Natural Language Processing
Tan, Hongye; Wang, Chong; Duan, Qinglong; Lu, Yu; Zhang, Hu; Li, Ru – Interactive Learning Environments, 2023
Automatic short answer grading (ASAG) is a challenging task that aims to predict a score for a given student response. Previous works on ASAG mainly use non-neural or neural methods. However, the former depends on handcrafted features and is limited by its inflexibility and high cost, and the latter ignores global word co-occurrence in a corpus and…
Descriptors: Automation, Grading, Computer Assisted Testing, Graphs
Wang, Hei-Chia; Maslim, Martinus; Kan, Chia-Hao – Education and Information Technologies, 2023
Distance learning frees the learning process from spatial constraints. Each mode of distance learning, including synchronous and asynchronous learning, has disadvantages. In synchronous learning, students have network bandwidth and noise concerns, but in asynchronous learning, they have fewer opportunities for engagement, such as asking questions.…
Descriptors: Automation, Artificial Intelligence, Computer Assisted Testing, Asynchronous Communication
Stefan Ruseti; Ionut Paraschiv; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Automated Essay Scoring (AES) is a well-studied problem in Natural Language Processing applied in education. Solutions vary from handcrafted linguistic features to large Transformer-based models, implying a significant effort in feature extraction and model implementation. We introduce a novel Automated Machine Learning (AutoML) pipeline…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essays
Stefan Ruseti; Ionut Paraschiv; Mihai Dascalu; Danielle S. McNamara – International Journal of Artificial Intelligence in Education, 2024
Automated Essay Scoring (AES) is a well-studied problem in Natural Language Processing applied in education. Solutions vary from handcrafted linguistic features to large Transformer-based models, implying a significant effort in feature extraction and model implementation. We introduce a novel Automated Machine Learning (AutoML) pipeline…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essays
Daniel Lupiya Mpolomoka – Pedagogical Research, 2025
Overview: This systematic review explores the utilization of artificial intelligence (AI) for assessment, grading, and feedback in higher education. The review aims to establish how AI technologies enhance efficiency, scalability, and personalized learning experiences in educational settings, while addressing associated challenges that arise due…
Descriptors: Artificial Intelligence, Higher Education, Evaluation Methods, Literature Reviews
Buczak, Philip; Huang, He; Forthmann, Boris; Doebler, Philipp – Journal of Creative Behavior, 2023
Traditionally, researchers employ human raters for scoring responses to creative thinking tasks. Apart from the associated costs this approach entails two potential risks. First, human raters can be subjective in their scoring behavior (inter-rater-variance). Second, individual raters are prone to inconsistent scoring patterns…
Descriptors: Computer Assisted Testing, Scoring, Automation, Creative Thinking
Shin, Jinnie; Gierl, Mark J. – Journal of Applied Testing Technology, 2022
Automated Essay Scoring (AES) technologies provide innovative solutions to score the written essays with a much shorter time span and at a fraction of the current cost. Traditionally, AES emphasized the importance of capturing the "coherence" of writing because abundant evidence indicated the connection between coherence and the overall…
Descriptors: Computer Assisted Testing, Scoring, Essays, Automation
Dongkwang Shin; Jang Ho Lee – ELT Journal, 2024
Although automated item generation has gained a considerable amount of attention in a variety of fields, it is still a relatively new technology in ELT contexts. Therefore, the present article aims to provide an accessible introduction to this powerful resource for language teachers based on a review of the available research. Particularly, it…
Descriptors: Language Tests, Artificial Intelligence, Test Items, Automation
McCaffrey, Daniel F.; Casabianca, Jodi M.; Ricker-Pedley, Kathryn L.; Lawless, René R.; Wendler, Cathy – ETS Research Report Series, 2022
This document describes a set of best practices for developing, implementing, and maintaining the critical process of scoring constructed-response tasks. These practices address both the use of human raters and automated scoring systems as part of the scoring process and cover the scoring of written, spoken, performance, or multimodal responses.…
Descriptors: Best Practices, Scoring, Test Format, Computer Assisted Testing
Samuel S. Davidson – ProQuest LLC, 2024
Automated corrective feedback (ACF), in which a computer system helps language learners identify and correct errors in their writing or speech, is considered an important tool for language instruction by many researchers. Such systems allow learners to correct their own mistakes, thereby reducing teacher workload and potentially preventing issues…
Descriptors: Computer Assisted Testing, Automation, Student Evaluation, Feedback (Response)
