Showing 1 to 15 of 221 results
Peer reviewed
Peter Rowlett; Chris Graham; Christian Lawson-Perfect – International Journal of Mathematical Education in Science and Technology, 2025
Partially automated assessment is implemented via the 'Printable worksheet' mode in the Numbas e-assessment system to create a mathematical modelling worksheet which is individualised with random parameters but completed and marked as if it were a non-automated piece of coursework, preserving validity while reducing the risk of academic misconduct…
Descriptors: Automation, Worksheets, Mathematical Models, Computer Assisted Testing
Peer reviewed
Ikkyu Choi; Matthew S. Johnson – Journal of Educational Measurement, 2025
Automated scoring systems provide multiple benefits but also pose challenges, notably potential bias. Various methods exist to evaluate these algorithms and their outputs for bias. Upon detecting bias, the next logical step is to investigate its cause, often by examining feature distributions. Recently, Johnson and McCaffrey proposed an…
Descriptors: Prediction, Bias, Automation, Scoring
Peer reviewed
Po-Chun Huang; Ying-Hong Chan; Ching-Yu Yang; Hung-Yuan Chen; Yao-Chung Fan – IEEE Transactions on Learning Technologies, 2024
The question generation (QG) task plays a crucial role in adaptive learning. While significant QG performance advancements have been reported, existing QG studies remain far from practical usage. One aspect that needs strengthening is the generation of question groups, which remains unaddressed. For forming a question group, intrafactors…
Descriptors: Automation, Test Items, Computer Assisted Testing, Test Construction
Peer reviewed
Ishaya Gambo; Faith-Jane Abegunde; Omobola Gambo; Roseline Oluwaseun Ogundokun; Akinbowale Natheniel Babatunde; Cheng-Chi Lee – Education and Information Technologies, 2025
The current educational system relies heavily on manual grading, posing challenges such as delayed feedback and grading inaccuracies. Automated grading tools (AGTs) offer solutions but come with limitations. To address this, "GRAD-AI" is introduced, an advanced AGT that combines automation with teacher involvement for precise grading,…
Descriptors: Automation, Grading, Artificial Intelligence, Computer Assisted Testing
Peer reviewed
Danielle Lottridge; Davis Dimalen; Gerald Weber – ACM Transactions on Computing Education, 2025
Automated assessment is well-established within computer science courses but largely absent from human-computer interaction (HCI) courses. Automating the assessment of HCI is challenging because the coursework tends not to be computational but rather highly creative, such as designing and implementing interactive…
Descriptors: Computer Science Education, Computer Assisted Testing, Automation, Man Machine Systems
Peer reviewed
Luyang Fang; Gyeonggeon Lee; Xiaoming Zhai – Journal of Educational Measurement, 2025
Machine learning-based automatic scoring faces challenges with imbalanced student responses across scoring categories. To address this, we introduce a novel text data augmentation framework that leverages GPT-4, a generative large language model, specifically tailored for imbalanced datasets in automatic scoring. Our experimental dataset consisted…
Descriptors: Computer Assisted Testing, Artificial Intelligence, Automation, Scoring
Peer reviewed
Ramnarain-Seetohul, Vidasha; Bassoo, Vandana; Rosunally, Yasmine – Education and Information Technologies, 2022
In automated essay scoring (AES) systems, similarity techniques are used to compute the score for student answers. Several methods to compute similarity have emerged over the years. However, only a few of them have been widely used in the AES domain. This work presents the findings of a ten-year review of similarity techniques applied in AES systems…
Descriptors: Computer Assisted Testing, Essays, Scoring, Automation
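The similarity-based scoring the entry above surveys typically compares a student response against a reference answer. A minimal sketch of one such technique, lexical bag-of-words cosine similarity, using only the standard library and hypothetical example texts (real AES systems use far richer representations):

```python
from collections import Counter
import math

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Bag-of-words cosine similarity between two short texts."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Map similarity to a 0-5 score band (illustrative thresholds only).
reference = "photosynthesis converts light energy into chemical energy"
student = "plants use photosynthesis to turn light energy into chemical energy"
score = round(cosine_similarity(reference, student) * 5, 1)
```

Identical texts score 1.0; texts with no shared vocabulary score 0.0, which is why purely lexical similarity is only one of several techniques reviewed.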
Peer reviewed
Mingfeng Xue; Yunting Liu; Xingyao Xiao; Mark Wilson – Journal of Educational Measurement, 2025
Prompts play a crucial role in eliciting accurate outputs from large language models (LLMs). This study examines the effectiveness of an automatic prompt engineering (APE) framework for automatic scoring in educational measurement. We collected constructed-response data from 930 students across 11 items and used human scores as the true labels. A…
Descriptors: Computer Assisted Testing, Prompting, Educational Assessment, Automation
Peer reviewed
Andreea Dutulescu; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Assessing the difficulty of reading comprehension questions is crucial to educational methodologies and language understanding technologies. Traditional methods of assessing question difficulty rely frequently on human judgments or shallow metrics, often failing to accurately capture the intricate cognitive demands of answering a question. This…
Descriptors: Difficulty Level, Reading Tests, Test Items, Reading Comprehension
Peer reviewed
Guher Gorgun; Okan Bulut – Education and Information Technologies, 2024
In light of the widespread adoption of technology-enhanced learning and assessment platforms, there is a growing demand for innovative, high-quality, and diverse assessment questions. Automatic Question Generation (AQG) has emerged as a valuable solution, enabling educators and assessment developers to efficiently produce a large volume of test…
Descriptors: Computer Assisted Testing, Test Construction, Test Items, Automation
Peer reviewed
Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2025
Automated multiple-choice question (MCQ) generation is valuable for scalable assessment and enhanced learning experiences. However, existing MCQ generation methods face challenges in ensuring plausible distractors and maintaining answer consistency. This paper introduces a method for MCQ generation that integrates reasoning-based explanations…
Descriptors: Automation, Computer Assisted Testing, Multiple Choice Tests, Natural Language Processing
Peer reviewed
Wesley Morris; Langdon Holmes; Joon Suh Choi; Scott Crossley – International Journal of Artificial Intelligence in Education, 2025
Recent developments in the field of artificial intelligence allow for improved performance in the automated assessment of extended response items in mathematics, potentially allowing for the scoring of these items cheaply and at scale. This study details the grand prize-winning approach to developing large language models (LLMs) to automatically…
Descriptors: Automation, Computer Assisted Testing, Mathematics Tests, Scoring
Peer reviewed
Ulrike Padó; Yunus Eryilmaz; Larissa Kirschner – International Journal of Artificial Intelligence in Education, 2024
Short-Answer Grading (SAG) is a time-consuming task for teachers that automated SAG models have long promised to make easier. However, there are three challenges for their broad-scale adoption: A technical challenge regarding the need for high-quality models, which is exacerbated for languages with fewer resources than English; a usability…
Descriptors: Grading, Automation, Test Format, Computer Assisted Testing
Peer reviewed
Putnikovic, Marko; Jovanovic, Jelena – IEEE Transactions on Learning Technologies, 2023
Automatic grading of short answers is an important task in computer-assisted assessment (CAA). Recently, embeddings, as semantic-rich textual representations, have been increasingly used to represent short answers and predict the grade. Despite the recent trend of applying embeddings in automatic short answer grading (ASAG), there are no…
Descriptors: Automation, Computer Assisted Testing, Grading, Natural Language Processing
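The embedding-based ASAG approach the entry above surveys represents each answer as a vector and predicts a grade from vector similarity. A minimal nearest-neighbor sketch with toy 3-dimensional vectors standing in for real sentence embeddings (all vectors and grades here are hypothetical):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return dot / (nu * nv) if nu and nv else 0.0

def predict_grade(answer_vec, graded):
    """Assign the grade of the most similar previously graded answer."""
    return max(graded, key=lambda pair: cosine(answer_vec, pair[0]))[1]

# Toy "embeddings" of graded answers; real systems derive these from
# a sentence-embedding model trained on text.
graded = [([0.9, 0.1, 0.0], 5), ([0.1, 0.8, 0.1], 2), ([0.0, 0.1, 0.9], 0)]
new_answer = [0.85, 0.15, 0.05]
grade = predict_grade(new_answer, graded)  # → 5
```

Nearest-neighbor lookup is only one way to map embeddings to grades; regression or classification heads over the same vectors are common alternatives in the ASAG literature.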
Peer reviewed
Tan, Hongye; Wang, Chong; Duan, Qinglong; Lu, Yu; Zhang, Hu; Li, Ru – Interactive Learning Environments, 2023
Automatic short answer grading (ASAG) is a challenging task that aims to predict a score for a given student response. Previous works on ASAG mainly use non-neural or neural methods. However, the former depends on handcrafted features and is limited by its inflexibility and high cost, and the latter ignores global word co-occurrence in a corpus and…
Descriptors: Automation, Grading, Computer Assisted Testing, Graphs