| Publication Date | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 1 |
| Since 2022 (last 5 years) | 2 |
| Since 2017 (last 10 years) | 2 |
| Since 2007 (last 20 years) | 2 |
| Descriptor | Count |
| --- | --- |
| Creativity | 2 |
| Models | 2 |
| Automation | 1 |
| Computation | 1 |
| Correlation | 1 |
| Cost Effectiveness | 1 |
| Creative Thinking | 1 |
| Data | 1 |
| Evaluators | 1 |
| German | 1 |
| Item Response Theory | 1 |
| Author | Count |
| --- | --- |
| Benjamin Goecke | 2 |
| Boris Forthmann | 2 |
| Kurt Haim | 1 |
| Paul V. DiStefano | 1 |
| Roger Beaty | 1 |
| Roger E. Beaty | 1 |
| Wolfgang Aschauer | 1 |
| Publication Type | Count |
| --- | --- |
| Journal Articles | 2 |
| Reports - Research | 2 |
Benjamin Goecke; Paul V. DiStefano; Wolfgang Aschauer; Kurt Haim; Roger Beaty; Boris Forthmann – Journal of Creative Behavior, 2024
Automated scoring is a current hot topic in creativity research. However, most research has focused on the English language and popular verbal creative thinking tasks, such as the alternate uses task. Therefore, in this study, we present a large language model approach for automated scoring of a scientific creative thinking task that assesses…
Descriptors: Creativity, Creative Thinking, Scoring, Automation
Boris Forthmann; Benjamin Goecke; Roger E. Beaty – Creativity Research Journal, 2025
Human ratings are ubiquitous in creativity research. Yet, the process of rating responses to creativity tasks -- typically several hundred or even thousands of responses per rater -- is often time-consuming and expensive. Planned missing data designs, where raters only rate a subset of the total number of responses, have recently been proposed as one…
Descriptors: Creativity, Research, Researchers, Research Methodology
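To make the planned missing data idea from this abstract concrete, here is a minimal sketch of assigning each response to only a few raters while keeping rater workloads roughly balanced. The function name and parameter values are illustrative assumptions, not taken from the article.

```python
import random

def planned_missing_assignment(n_responses, n_raters, ratings_per_response, seed=0):
    """Assign each response to a small subset of raters (planned missing data design),
    so no single rater has to score every response.

    Returns a dict mapping response index -> list of rater indices.
    """
    rng = random.Random(seed)
    assignment = {}
    load = {r: 0 for r in range(n_raters)}  # how many responses each rater has so far
    for i in range(n_responses):
        # Prefer the least-loaded raters, breaking ties at random.
        ranked = sorted(range(n_raters), key=lambda r: (load[r], rng.random()))
        chosen = ranked[:ratings_per_response]
        for r in chosen:
            load[r] += 1
        assignment[i] = chosen
    return assignment

if __name__ == "__main__":
    # Hypothetical numbers: 1000 responses, 6 raters, 2 ratings per response,
    # so each rater scores about 333 responses instead of all 1000.
    plan = planned_missing_assignment(n_responses=1000, n_raters=6, ratings_per_response=2)
    per_rater = {}
    for raters in plan.values():
        for r in raters:
            per_rater[r] = per_rater.get(r, 0) + 1
    print(per_rater)
```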

