Publication Date
| Date range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 2 |
| Since 2022 (last 5 years) | 12 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Computer Assisted Testing | 12 |
| Prompting | 12 |
| Scoring | 6 |
| Artificial Intelligence | 4 |
| Essays | 4 |
| Automation | 3 |
| Natural Language Processing | 3 |
| Writing Evaluation | 3 |
| Accuracy | 2 |
| Elementary School Students | 2 |
| Evaluation Methods | 2 |
Author
| Author | Count |
| --- | --- |
| Akhmedjanova, Diana | 1 |
| Barbosa, Denilson | 1 |
| Belkina, Marina | 1 |
| Bond, Trevor | 1 |
| Bulut, Okan | 1 |
| Chan, Kinnie Kin Yee | 1 |
| Daniel, Scott | 1 |
| Dumas, Denis | 1 |
| Epp, Carrie Demmans | 1 |
| Franklin, David W. | 1 |
| Lui, Angela M. | 1 |
Publication Type
| Publication type | Count |
| --- | --- |
| Reports - Research | 11 |
| Journal Articles | 10 |
| Information Analyses | 1 |
| Speeches/Meeting Papers | 1 |
Education Level
| Education level | Count |
| --- | --- |
| Higher Education | 3 |
| Postsecondary Education | 3 |
| Elementary Education | 2 |
| Grade 4 | 1 |
| Grade 5 | 1 |
| Intermediate Grades | 1 |
| Middle Schools | 1 |
| Secondary Education | 1 |
Location
| Location | Count |
| --- | --- |
| Australia | 1 |
| China | 1 |
| New York (Albany) | 1 |
Priti Oli; Rabin Banjade; Jeevan Chapagain; Vasile Rus – Grantee Submission, 2024
Assessing students' answers, and in particular natural-language answers, is a crucial challenge in the field of education. Advances in transformer-based models such as Large Language Models (LLMs) have led to significant progress in various natural language tasks. Nevertheless, amidst the growing trend of evaluating LLMs across diverse tasks,…
Descriptors: Student Evaluation, Computer Assisted Testing, Artificial Intelligence, Comprehension
Mingfeng Xue; Yunting Liu; Xingyao Xiao; Mark Wilson – Journal of Educational Measurement, 2025
Prompts play a crucial role in eliciting accurate outputs from large language models (LLMs). This study examines the effectiveness of an automatic prompt engineering (APE) framework for automatic scoring in educational measurement. We collected constructed-response data from 930 students across 11 items and used human scores as the true labels. A…
Descriptors: Computer Assisted Testing, Prompting, Educational Assessment, Automation
Shin, Jinnie; Gierl, Mark J. – Journal of Applied Testing Technology, 2022
Automated Essay Scoring (AES) technologies provide innovative solutions for scoring written essays within a much shorter time span and at a fraction of the current cost. Traditionally, AES emphasized the importance of capturing the "coherence" of writing because abundant evidence indicated the connection between coherence and the overall…
Descriptors: Computer Assisted Testing, Scoring, Essays, Automation
Firoozi, Tahereh; Bulut, Okan; Epp, Carrie Demmans; Naeimabadi, Ali; Barbosa, Denilson – Journal of Applied Testing Technology, 2022
Automated Essay Scoring (AES) using neural networks has helped increase the accuracy and efficiency of scoring students' written tasks. Generally, the improved accuracy of neural network approaches has been attributed to the use of modern word embedding techniques. However, which word embedding techniques produce higher accuracy in AES systems…
Descriptors: Computer Assisted Testing, Scoring, Essays, Artificial Intelligence
Wei Sun; Yousun Shin – SAGE Open, 2025
This quasi-experimental study compared the effects of Computer-assisted Hybrid Dynamic Assessment (HDA) and Interventionist Dynamic Assessment (IDA) on the development of organizational writing skills in L2, aiming to improve both assessment and instruction within the Chinese EFL context. A total of 85 first-year university students in China…
Descriptors: Computer Assisted Testing, Writing Skills, Second Language Learning, English (Second Language)
Fu-Yun Yu – Interactive Learning Environments, 2024
Currently, 50+ learning systems supporting student question-generation (SQG) activities have been developed. While generating questions of different types is supported in many of these systems, systems allowing students to generate questions around a scenario (i.e., student testlet-generation, STG) are not yet available. Noting the increasing…
Descriptors: Computer Assisted Testing, Test Format, Test Construction, Test Items
Peter Organisciak; Selcuk Acar; Denis Dumas; Kelly Berthiaume – Grantee Submission, 2023
Automated scoring for divergent thinking (DT) seeks to overcome a key obstacle to creativity measurement: the effort, cost, and reliability of scoring open-ended tests. For a common test of DT, the Alternate Uses Task (AUT), the primary automated approach casts the problem as a semantic distance between a prompt and the resulting idea in a text…
Descriptors: Automation, Computer Assisted Testing, Scoring, Creative Thinking
Nikolic, Sasha; Daniel, Scott; Haque, Rezwanul; Belkina, Marina; Hassan, Ghulam M.; Grundy, Sarah; Lyden, Sarah; Neal, Peter; Sandison, Caz – European Journal of Engineering Education, 2023
ChatGPT, a sophisticated online chatbot, sent shockwaves through many sectors once reports filtered through that it could pass exams. In higher education, it has raised many questions about the authenticity of assessment and challenges in detecting plagiarism. Amongst the resulting frenetic hubbub, hints of potential opportunities in how ChatGPT…
Descriptors: Artificial Intelligence, Performance Based Assessment, Engineering Education, Integrity
David W. Franklin; Jason Bryer; Angela M. Lui; Heidi L. Andrade; Diana Akhmedjanova – Online Learning, 2022
The purpose of this study is to examine the effects of nudges on online college students' use of the Diagnostic Assessment and Achievement of College Skills (DAACS), a suite of free, online assessments, feedback, and resources designed to optimize student success in college. The results indicate that the nudges had an effect on students'…
Descriptors: Undergraduate Students, Educational Diagnosis, Academic Achievement, Academic Ability
Chan, Kinnie Kin Yee; Bond, Trevor; Yan, Zi – Language Testing, 2023
We investigated the relationship between the scores assigned by an Automated Essay Scoring (AES) system, the Intelligent Essay Assessor (IEA), and grades allocated by trained, professional human raters to English essay writing by instigating two procedures novel to written-language assessment: the logistic transformation of AES raw scores into…
Descriptors: Computer Assisted Testing, Essays, Scoring, Scores
Suzumura, Nana – Language Assessment Quarterly, 2022
The present study is part of a larger mixed-methods project that investigated the speaking section of the Advanced Placement (AP) Japanese Language and Culture Exam. It examined assumptions for the evaluation inference through a content analysis of test-taker responses. Results of the content analysis were integrated with those of a many-facet…
Descriptors: Content Analysis, Test Wiseness, Advanced Placement, Computer Assisted Testing
Wilson, Joshua; Wen, Huijing – Elementary School Journal, 2022
This study investigated fourth and fifth graders' metacognitive knowledge about writing and its relationship to writing performance to help identify areas that might be leveraged when designing effective writing instruction. Students' metacognitive knowledge was probed using a 30-minute informative writing prompt requiring students to teach their…
Descriptors: Elementary School Students, Metacognition, Writing Attitudes, Writing (Composition)
