Publication Date
| Date | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 11 |
Descriptor
| Descriptor | Records |
| --- | --- |
| Natural Language Processing | 10 |
| Artificial Intelligence | 6 |
| Automation | 2 |
| Classification | 2 |
| Computational Linguistics | 2 |
| Computer Software | 2 |
| Connected Discourse | 2 |
| Correlation | 2 |
| Cues | 2 |
| Data Use | 2 |
| Educational Technology | 2 |
Source
| Source | Records |
| --- | --- |
| Grantee Submission | 11 |
Author
| Author | Records |
| --- | --- |
| Danielle S. McNamara | 5 |
| Gautam Biswas | 2 |
| Linh Huynh | 2 |
| Namrata Srivastava | 2 |
| Rod D. Roscoe | 2 |
| Amanda Goodwin | 1 |
| Andreea Dutulescu | 1 |
| Andrew Avitabile | 1 |
| Andrew Kwok | 1 |
| Andrew Potter | 1 |
| Angela Eeds | 1 |
Publication Type
| Publication Type | Records |
| --- | --- |
| Reports - Research | 10 |
| Journal Articles | 4 |
| Speeches/Meeting Papers | 3 |
| Reports - Evaluative | 1 |
Location
| Location | Records |
| --- | --- |
| Texas | 1 |
Assessments and Surveys
| Assessment | Records |
| --- | --- |
| Digit Span Test | 1 |
| Stanford Early School… | 1 |
| Wechsler Preschool and… | 1 |
Large Language Models and Intelligent Tutoring Systems: Conflicting Paradigms and Possible Solutions
Peer reviewed
Punya Mishra; Danielle S. McNamara; Gregory Goodwin; Diego Zapata-Rivera – Grantee Submission, 2025
The advent of Large Language Models (LLMs) has fundamentally disrupted our thinking about educational technology. Their ability to engage in natural dialogue, provide contextually relevant responses, and adapt to learner needs has led many to envision them as powerful tools for personalized learning. This emergence raises important questions about…
Descriptors: Artificial Intelligence, Intelligent Tutoring Systems, Technology Uses in Education, Educational Technology
Peer reviewed
Clayton Cohn; Surya Rayala; Caitlin Snyder; Joyce Horn Fonteles; Shruti Jain; Naveeduddin Mohammed; Umesh Timalsina; Sarah K. Burriss; Ashwin T. S.; Namrata Srivastava; Menton Deweese; Angela Eeds; Gautam Biswas – Grantee Submission, 2025
Collaborative dialogue offers rich insights into students' learning and critical thinking. This is essential for adapting pedagogical agents to students' learning and problem-solving skills in STEM+C settings. While large language models (LLMs) facilitate dynamic pedagogical interactions, potential hallucinations can undermine confidence, trust,…
Descriptors: STEM Education, Computer Science Education, Artificial Intelligence, Natural Language Processing
Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2025
Automated multiple-choice question (MCQ) generation is valuable for scalable assessment and enhanced learning experiences. However, existing MCQ generation methods face challenges in ensuring plausible distractors and maintaining answer consistency. This paper introduces a method for MCQ generation that integrates reasoning-based explanations…
Descriptors: Automation, Computer Assisted Testing, Multiple Choice Tests, Natural Language Processing
Wen-Chiang Ivan Lim; Neil T. Heffernan III; Ivan Eroshenko; Wai Khumwang; Pei-Chen Chan – Grantee Submission, 2025
Intelligent tutoring systems are increasingly used in schools, providing teachers with valuable analytics on student learning. However, many teachers lack the time to review these reports in detail due to heavy workloads, and some face challenges with data literacy. This project investigates the use of large language models (LLMs) to generate…
Descriptors: Intelligent Tutoring Systems, Natural Language Processing, Assignments, Learning Management Systems
Ishrat Ahmed; Wenxing Liu; Rod D. Roscoe; Elizabeth Reilley; Danielle S. McNamara – Grantee Submission, 2025
Large language models (LLMs) are increasingly being utilized to develop tools and services in various domains, including education. However, due to the nature of the training data, these models are susceptible to inherent social or cognitive biases, which can influence their outputs. Furthermore, their handling of critical topics, such as privacy…
Descriptors: Artificial Intelligence, Natural Language Processing, Computer Mediated Communication, College Students
Linh Huynh; Danielle S. McNamara – Grantee Submission, 2025
Four versions of science and history texts were tailored to diverse hypothetical reader profiles (high and low reading skills and domain knowledge), generated by four Large Language Models (i.e., Claude, Llama, ChatGPT, and Gemini). Natural Language Processing (NLP) techniques were applied to examine variations in Large Language Model (LLM) text…
Descriptors: Artificial Intelligence, Natural Language Processing, Textbook Evaluation, Individualized Instruction
Eduardo Davalos; Yike Zhang; Namrata Srivastava; Jorge Alberto Salas; Sara McFadden; Sun-Joo Cho; Gautam Biswas; Amanda Goodwin – Grantee Submission, 2025
Reading assessments are essential for enhancing students' comprehension, yet many EdTech applications focus mainly on outcome-based metrics, providing limited insights into student behavior and cognition. This study investigates the use of multimodal data sources -- including eye-tracking data, learning outcomes, assessment content, and teaching…
Descriptors: Natural Language Processing, Learning Analytics, Reading Tests, Reading Comprehension
Linh Huynh; Danielle S. McNamara – Grantee Submission, 2025
We conducted two experiments to assess the alignment between Generative AI (GenAI) text personalization and hypothetical readers' profiles. In Experiment 1, four LLMs (i.e., Claude 3.5 Sonnet; Llama; Gemini Pro 1.5; ChatGPT 4) were prompted to tailor 10 science texts (i.e., biology, chemistry, physics) to accommodate four different profiles…
Descriptors: Natural Language Processing, Profiles, Individual Differences, Semantics
Andrew Potter; Mitchell Shortt; Maria Goldshtein; Rod D. Roscoe – Grantee Submission, 2025
Broadly defined, academic language (AL) is a set of lexical-grammatical norms and registers commonly used in educational and academic discourse. Mastery of academic language in writing is an important aspect of writing instruction and assessment. The purpose of this study was to use Natural Language Processing (NLP) tools to examine the extent to…
Descriptors: Academic Language, Natural Language Processing, Grammar, Vocabulary Skills
Brendan Bartanen; Andrew Kwok; Andrew Avitabile; Brian Heseung Kim – Grantee Submission, 2025
Heightened concerns about the health of the teaching profession highlight the importance of studying the early teacher pipeline. This exploratory, descriptive article examines preservice teachers' expressed motivation for pursuing a teaching career. Using data from a large teacher education program in Texas, we use a natural language processing…
Descriptors: Career Choice, Teaching (Occupation), Teacher Education Programs, Preservice Teachers
Taylor Lesner; Ben Clarke; Derek Kosty; Geovanna Rodriguez; Elizabeth L. Budd; Christian Doabler – Grantee Submission, 2025
This secondary analysis of data from a randomized control trial of an early mathematics intervention, ROOTS, explored whether patterns of intervention response were best categorized by the typical response/non-response binary or a more complex framework with additional response profiles. Participants included kindergarten students at risk for…
Descriptors: Mathematics Instruction, Response to Intervention, At Risk Students, Kindergarten
