Peer reviewed
ERIC Number: EJ1478484
Record Type: Journal
Publication Date: 2025-Jul
Pages: 35
Abstractor: As Provided
ISBN: N/A
ISSN: 1069-4730
EISSN: 2168-9830
Available Date: 2025-05-22
Methodological Foundations for Artificial Intelligence-Driven Survey Question Generation
Journal of Engineering Education, v114 n3 e70012 2025
Background: This study investigates the use of large language models to create adaptive, contextually relevant survey questions, aiming to enhance data quality in educational research without limiting scalability.
Purpose: We provide step-by-step methods for developing a dynamic, artificial intelligence (AI)-driven survey instrument and introduce the Synthetic Question-Response Analysis (SQRA) framework, a methodology designed to help evaluate AI-generated questions before deployment with human participants.
Design: We examine the questions generated by our survey instrument and compare AI-to-AI interactions, generated through the SQRA framework, with AI-to-human interactions. Activity theory provides a theoretical lens for examining the dynamic interactions between AI and participants, highlighting the mutual influence within the survey tool.
Results: We found that AI-generated questions were contextually relevant and adaptable, successfully incorporating course-specific references. However, issues such as redundant phrasing, double-barreled questions, and jargon affected the clarity of the questions. Although the SQRA framework exhibited limitations in replicating human response variability, its iterative refinement process proved effective in improving question quality, reinforcing the utility of this approach for enhancing AI-driven surveys.
Conclusions: While AI-driven question generation can enhance the scalability and personalization of open-ended survey prompts, more research is needed to establish best practices for high-quality educational research. The SQRA framework demonstrated practical utility for prompt refinement and initial validation of AI-generated survey content, but it cannot replicate human responses. We highlight the importance of iterative prompt engineering, ethical considerations, and continued methodological advancement in developing trustworthy AI-driven survey instruments for educational research.
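The Design and Results sections above describe an AI-to-AI evaluation loop in which generated questions are answered by synthetic respondents and then iteratively refined before any human deployment. The Python sketch below is illustrative only and is not taken from the article; the call_llm stub, the persona prompts, and the iteration structure are assumptions about how such a loop might be organized, not the authors' implementation.

# Illustrative sketch (not from the article): an AI-to-AI refinement loop in the
# spirit of the SQRA framework. One model drafts an open-ended survey question,
# another plays a synthetic student respondent, and a third pass critiques the
# question for the issues the study reports (redundancy, double-barreled wording,
# jargon). call_llm is a hypothetical stand-in for any LLM API.

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with your provider's client."""
    raise NotImplementedError

def generate_question(course_context: str, prior_feedback: str = "") -> str:
    # Draft one clear, single-focus question grounded in course-specific context.
    prompt = (
        "Write one clear, single-focus, jargon-free open-ended survey question "
        f"about the student experience in this course: {course_context}. "
        f"Revise it based on this feedback, if any: {prior_feedback}"
    )
    return call_llm(prompt)

def synthetic_response(question: str, persona: str) -> str:
    # Synthetic respondent: answer in character as a student persona.
    return call_llm(f"Answer as a student ({persona}): {question}")

def critique(question: str, responses: list[str]) -> str:
    # Flag clarity problems given the question and the synthetic answers.
    prompt = (
        "Flag redundancy, double-barreled wording, or jargon in this question, "
        f"given these synthetic answers.\nQuestion: {question}\nAnswers: {responses}"
    )
    return call_llm(prompt)

def sqra_style_iteration(course_context: str, personas: list[str], rounds: int = 3) -> str:
    # Iterative prompt refinement: generate, simulate responses, critique, repeat.
    question, feedback = "", ""
    for _ in range(rounds):
        question = generate_question(course_context, feedback)
        answers = [synthetic_response(question, p) for p in personas]
        feedback = critique(question, answers)
    return question

As the abstract notes, such a loop can support prompt refinement and initial validation of AI-generated questions, but synthetic respondents should not be treated as substitutes for human response variability.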
Wiley. Available from: John Wiley & Sons, Inc. 111 River Street, Hoboken, NJ 07030. Tel: 800-835-6770; e-mail: cs-journals@wiley.com; Web site: https://www-wiley-com.bibliotheek.ehb.be/en-us
Publication Type: Journal Articles; Reports - Research
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Grant or Contract Numbers: N/A
Author Affiliations: (1) Engineering Education Program, University of Colorado Boulder, Boulder, Colorado, USA; (2) Meinig School of Biomedical Engineering, Cornell University, Ithaca, New York, USA