Showing all 4 results
Peer reviewed
Punya Mishra; Danielle S. McNamara; Gregory Goodwin; Diego Zapata-Rivera – Grantee Submission, 2025
The advent of Large Language Models (LLMs) has fundamentally disrupted our thinking about educational technology. Their ability to engage in natural dialogue, provide contextually relevant responses, and adapt to learner needs has led many to envision them as powerful tools for personalized learning. This emergence raises important questions about…
Descriptors: Artificial Intelligence, Intelligent Tutoring Systems, Technology Uses in Education, Educational Technology
Peer reviewed
Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2025
Automated multiple-choice question (MCQ) generation is valuable for scalable assessment and enhanced learning experiences. However, existing MCQ generation methods face challenges in ensuring plausible distractors and maintaining answer consistency. This paper introduces a method for MCQ generation that integrates reasoning-based explanations…
Descriptors: Automation, Computer Assisted Testing, Multiple Choice Tests, Natural Language Processing
Peer reviewed
Ishrat Ahmed; Wenxing Liu; Rod D. Roscoe; Elizabeth Reilley; Danielle S. McNamara – Grantee Submission, 2025
Large language models (LLMs) are increasingly being utilized to develop tools and services in various domains, including education. However, due to the nature of the training data, these models are susceptible to inherent social or cognitive biases, which can influence their outputs. Furthermore, their handling of critical topics, such as privacy…
Descriptors: Artificial Intelligence, Natural Language Processing, Computer Mediated Communication, College Students
Peer reviewed
Linh Huynh; Danielle S. McNamara – Grantee Submission, 2025
We conducted two experiments to assess the alignment between Generative AI (GenAI) text personalization and hypothetical readers' profiles. In Experiment 1, four LLMs (i.e., Claude 3.5 Sonnet; Llama; Gemini Pro 1.5; ChatGPT 4) were prompted to tailor 10 science texts (i.e., biology, chemistry, physics) to accommodate four different profiles…
Descriptors: Natural Language Processing, Profiles, Individual Differences, Semantics