ERIC Number: EJ1489642
Record Type: Journal
Publication Date: 2025
Pages: 22
Abstractor: As Provided
ISBN: N/A
ISSN: 1755-6031
EISSN: N/A
Available Date: N/A
The Ethics of Deploying Large Language Models in High-Stakes Automarking
Frank Morley; Emma Walland
Research Matters, n40 p72-92 2025
The recent development of Large Language Models (LLMs) such as Claude, Gemini, and GPT has attracted widespread attention to the potential applications of these models. Marking exams is a domain that requires the ability to interpret and evaluate student responses (often consisting of written text), and the potential for artificial intelligence (AI) tools based on LLMs to contribute to the marking process has been noted and researched. This article imagines three scenarios in which LLM-based automarking is applied in a high-stakes exam context, and examines three ethical issues identified on the basis of LLM technology as of June 2025 (Aloisi, 2023; Morley et al., 2025). These scenarios allow us to discuss: (1) explainability, whether automarking decisions can be explained; (2) bias, whether automarkers could make biased decisions that disadvantage certain demographic groups; and (3) adversarial attacks, how susceptible automarkers might be to actors exploiting their weaknesses in order to obtain a higher mark. Within the scenarios, we compare automarkers to human examiners in the context of well-functioning human marking in high-stakes settings (based on research on exam board marking in England). This allows us to explore the ethical dimensions of automarkers in light of what we know about automatic and human marking. In particular, this article argues that human examiners have a higher potential for trustworthiness than LLM-based automarkers.
Cambridge University Press & Assessment. Shaftesbury Road Cambridge CB2 8EA. Tel: 44-1223-553311; e-mail: directcs@cambridge.org; Web site: https://www.cambridgeassessment.org.uk/our-research/all-published-resources/research-matters/
Publication Type: Journal Articles; Reports - Evaluative
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Grant or Contract Numbers: N/A
Author Affiliations: N/A