| Publication Date | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 9 |
| Descriptor | Count |
| --- | --- |
| Computer Assisted Testing | 9 |
| Scoring | 9 |
| Artificial Intelligence | 4 |
| Computer Software | 3 |
| Creative Thinking | 3 |
| Creativity Tests | 3 |
| Evaluation Methods | 3 |
| Test Items | 3 |
| Accuracy | 2 |
| Automation | 2 |
| Correlation | 2 |
| Source | Count |
| --- | --- |
| Journal of Creative Behavior | 2 |
| Journal of Educational… | 2 |
| ACT Education Corp. | 1 |
| British Educational Research… | 1 |
| International Electronic… | 1 |
| International Journal of… | 1 |
| Journal of Learning Analytics | 1 |
| Author | Count |
| --- | --- |
| Alex J. Mechaber | 1 |
| Ann Arthur | 1 |
| Arnon Hershkovitz | 1 |
| Bhashithe Abeysinghe | 1 |
| Brian E. Clauser | 1 |
| Chen Qiu | 1 |
| Chi-Yu Huang | 1 |
| Congning Ni | 1 |
| Denis Dumas | 1 |
| Dongmei Li | 1 |
| Eran Hadas | 1 |
| Publication Type | Count |
| --- | --- |
| Reports - Research | 9 |
| Journal Articles | 8 |
| Education Level | Count |
| --- | --- |
| Secondary Education | 3 |
| Higher Education | 2 |
| Junior High Schools | 2 |
| Middle Schools | 2 |
| Postsecondary Education | 2 |
| Elementary Education | 1 |
| Grade 8 | 1 |
| Grade 9 | 1 |
| High Schools | 1 |
| Assessments and Surveys | Count |
| --- | --- |
| National Assessment of… | 2 |
| ACT Assessment | 1 |
| Program for International… | 1 |
| Torrance Tests of Creative… | 1 |
Selcuk Acar; Peter Organisciak; Denis Dumas – Journal of Creative Behavior, 2025
In this three-study investigation, we applied various approaches to score drawings created in response to both Form A and Form B of the Torrance Tests of Creative Thinking-Figural (broadly TTCT-F) as well as the Multi-Trial Creative Ideation task (MTCI). We focused on TTCT-F in Study 1, and utilizing a random forest classifier, we achieved 79% and…
Descriptors: Scoring, Computer Assisted Testing, Models, Correlation
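As an illustration of the classifier-based scoring named in the Acar, Organisciak, and Dumas entry above, the sketch below fits a random forest to placeholder drawing features. The feature extraction, labels, and data here are hypothetical stand-ins, not the study's actual pipeline.

```python
# Minimal sketch (not the authors' pipeline): classifying figural responses
# with a random forest, assuming drawings have already been converted to
# numeric feature vectors and paired with human creativity labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 128))    # placeholder drawing features (e.g., image embeddings)
y = rng.integers(0, 2, size=500)   # placeholder human labels; random here, so accuracy is near chance

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_train, y_train)
print(f"holdout accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```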
Peter Baldwin; Victoria Yaneva; Kai North; Le An Ha; Yiyun Zhou; Alex J. Mechaber; Brian E. Clauser – Journal of Educational Measurement, 2025
Recent developments in the use of large-language models have led to substantial improvements in the accuracy of content-based automated scoring of free-text responses. The reported accuracy levels suggest that automated systems could have widespread applicability in assessment. However, before they are used in operational testing, other aspects of…
Descriptors: Artificial Intelligence, Scoring, Computational Linguistics, Accuracy
Wesley Morris; Langdon Holmes; Joon Suh Choi; Scott Crossley – International Journal of Artificial Intelligence in Education, 2025
Recent developments in the field of artificial intelligence allow for improved performance in the automated assessment of extended response items in mathematics, potentially allowing for the scoring of these items cheaply and at scale. This study details the grand prize-winning approach to developing large language models (LLMs) to automatically…
Descriptors: Automation, Computer Assisted Testing, Mathematics Tests, Scoring
Jonas Flodén – British Educational Research Journal, 2025
This study compares how the generative AI (GenAI) large language model (LLM) ChatGPT performs in grading university exams compared to human teachers. Aspects investigated include consistency, large discrepancies and length of answer. Implications for higher education, including the role of teachers and ethics, are also discussed. Three…
Descriptors: College Faculty, Artificial Intelligence, Comparative Testing, Scoring
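To make the human-AI grading comparison in the Flodén entry concrete, the sketch below computes quadratic weighted kappa and flags large score discrepancies on hypothetical parallel scores; the metric choice and data are illustrative assumptions, not necessarily what the study used.

```python
# Minimal sketch, assuming hypothetical parallel score lists: quantifying
# AI-human grading consistency and flagging large discrepancies.
from sklearn.metrics import cohen_kappa_score

human = [4, 3, 5, 2, 4, 1, 3, 5, 2, 4]   # hypothetical teacher scores
ai    = [4, 4, 5, 1, 3, 1, 3, 5, 3, 4]   # hypothetical LLM scores

qwk = cohen_kappa_score(human, ai, weights="quadratic")
large_gaps = [i for i, (h, a) in enumerate(zip(human, ai)) if abs(h - a) >= 2]
print(f"quadratic weighted kappa: {qwk:.2f}")
print(f"exams with a discrepancy of 2+ points: {large_gaps}")
```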
Mathias Benedek; Roger E. Beaty – Journal of Creative Behavior, 2025
The PISA assessment 2022 of creative thinking was a moonshot effort that introduced significant advancements over existing creativity tests, including a broad range of domains (written, visual, social, and scientific), implementation in many languages, and sophisticated scoring methods. PISA 2022 demonstrated the general feasibility of assessing…
Descriptors: Creative Thinking, Creativity, Creativity Tests, Scoring
Wallace N. Pinto Jr.; Jinnie Shin – Journal of Educational Measurement, 2025
In recent years, the application of explainability techniques to automated essay scoring and automated short-answer grading (ASAG) models, particularly those based on transformer architectures, has gained significant attention. However, the reliability and consistency of these techniques remain underexplored. This study systematically investigates…
Descriptors: Automation, Grading, Computer Assisted Testing, Scoring
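The Pinto and Shin entry concerns how reliable explainability techniques are for automated scoring models. As a hedged illustration only, the sketch below applies token occlusion to a toy answer scorer and checks rank agreement between two explanation runs; the scorer, answer text, and consistency check are hypothetical stand-ins, far simpler than transformer-based ASAG models.

```python
# Hedged illustration: token-occlusion importance for a toy answer scorer,
# plus a rank-agreement check between two explanation runs.
from scipy.stats import spearmanr

def score_fn(text: str) -> float:
    """Toy stand-in for an automated short-answer scorer (keyword coverage)."""
    keywords = {"photosynthesis", "sunlight", "glucose"}
    return sum(word.lower().strip(".,") in keywords for word in text.split()) / 3.0

def occlusion_importance(text: str) -> list[float]:
    """Importance of each token = score drop when that token is removed."""
    tokens = text.split()
    base = score_fn(text)
    return [base - score_fn(" ".join(tokens[:i] + tokens[i + 1:]))
            for i in range(len(tokens))]

answer = "Plants use sunlight to make glucose during photosynthesis"
run_a = occlusion_importance(answer)
run_b = occlusion_importance(answer)  # identical here; a stochastic scorer could differ
rho, _ = spearmanr(run_a, run_b)      # rank agreement between the two explanations
print("token importances:", [round(v, 2) for v in run_a])
print(f"explanation consistency (Spearman rho): {rho:.2f}")
```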
Eran Hadas; Arnon Hershkovitz – Journal of Learning Analytics, 2025
Creativity is an imperative skill for today's learners, one that has important contributions to issues of inclusion and equity in education. Therefore, assessing creativity is of major importance in educational contexts. However, scoring creativity based on traditional tools suffers from subjectivity and is heavily time- and labour-consuming. This…
Descriptors: Creativity, Evaluation Methods, Computer Assisted Testing, Artificial Intelligence
Dongmei Li; Shalini Kapoor; Ann Arthur; Chi-Yu Huang; YoungWoo Cho; Chen Qiu; Hongling Wang – ACT Education Corp., 2025
Starting in April 2025, ACT will introduce enhanced forms of the ACT® test for national online testing, with a full rollout to all paper and online test takers in national, state and district, and international test administrations by Spring 2026. ACT introduced major updates by changing the test lengths and testing times, providing more time per…
Descriptors: College Entrance Examinations, Testing, Change, Scoring
Congning Ni; Bhashithe Abeysinghe; Juanita Hicks – International Electronic Journal of Elementary Education, 2025
The National Assessment of Educational Progress (NAEP), often referred to as The Nation's Report Card, offers a window into the state of the U.S. K-12 education system. Since 2017, NAEP has transitioned to digital assessments, opening new research opportunities that were previously impossible. Process data tracks students' interactions with the…
Descriptors: Reaction Time, Multiple Choice Tests, Behavior Change, National Competency Tests
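The Ni, Abeysinghe, and Hicks entry works with NAEP process data. Under an assumed and greatly simplified event-log format (the real NAEP schema is not reproduced here), the sketch below shows how per-item response time and answer changes can be derived from such interaction logs.

```python
# Minimal sketch over a hypothetical simplified event log: deriving per-item
# time on task and answer-change counts from process data.
from collections import defaultdict

events = [  # (item_id, timestamp_seconds, action) -- invented example records
    ("item1", 0.0, "enter"), ("item1", 12.4, "select_B"),
    ("item1", 20.1, "select_C"), ("item1", 25.0, "exit"),
    ("item2", 25.0, "enter"), ("item2", 41.7, "select_A"),
    ("item2", 43.0, "exit"),
]

time_on_item = defaultdict(float)
selections = defaultdict(list)
enter_time = {}

for item, t, action in events:
    if action == "enter":
        enter_time[item] = t
    elif action == "exit":
        time_on_item[item] += t - enter_time[item]
    elif action.startswith("select_"):
        selections[item].append(action.split("_", 1)[1])

for item, seconds in time_on_item.items():
    changes = max(0, len(selections[item]) - 1)
    print(f"{item}: {seconds:.1f}s on item, {changes} answer change(s)")
```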