Publication Date
| In 2026 | 0 |
| Since 2025 | 10 |
| Since 2022 (last 5 years) | 44 |
| Since 2017 (last 10 years) | 111 |
| Since 2007 (last 20 years) | 141 |
Descriptor
| Natural Language Processing | 141 |
| Intelligent Tutoring Systems | 52 |
| Artificial Intelligence | 48 |
| Reading Comprehension | 42 |
| Educational Technology | 30 |
| Essays | 29 |
| Automation | 28 |
| Semantics | 27 |
| Feedback (Response) | 25 |
| Computational Linguistics | 24 |
| Scoring | 24 |
Source
| Grantee Submission | 141 |
Publication Type
| Reports - Research | 113 |
| Speeches/Meeting Papers | 75 |
| Journal Articles | 34 |
| Reports - Descriptive | 17 |
| Reports - Evaluative | 11 |
| Tests/Questionnaires | 3 |
| Information Analyses | 2 |
Audience
| Researchers | 1 |
| Teachers | 1 |
Location
| Pennsylvania | 4 |
| California | 3 |
| Arizona (Phoenix) | 2 |
| Pennsylvania (Pittsburgh) | 2 |
| Africa | 1 |
| Arizona | 1 |
| California (Long Beach) | 1 |
| Canada | 1 |
| Florida | 1 |
| Illinois | 1 |
| Kenya | 1 |
Assessments and Surveys
| Gates MacGinitie Reading Tests | 7 |
| Flesch Kincaid Grade Level… | 2 |
| Writing Apprehension Test | 2 |
| Flesch Reading Ease Formula | 1 |
| Woodcock Johnson Tests of… | 1 |
Large Language Models and Intelligent Tutoring Systems: Conflicting Paradigms and Possible Solutions
Peer reviewed: Punya Mishra; Danielle S. McNamara; Gregory Goodwin; Diego Zapata-Rivera – Grantee Submission, 2025
The advent of Large Language Models (LLMs) has fundamentally disrupted our thinking about educational technology. Their ability to engage in natural dialogue, provide contextually relevant responses, and adapt to learner needs has led many to envision them as powerful tools for personalized learning. This emergence raises important questions about…
Descriptors: Artificial Intelligence, Intelligent Tutoring Systems, Technology Uses in Education, Educational Technology
Peer reviewed: Clayton Cohn; Surya Rayala; Caitlin Snyder; Joyce Horn Fonteles; Shruti Jain; Naveeduddin Mohammed; Umesh Timalsina; Sarah K. Burriss; Ashwin T. S.; Namrata Srivastava; Menton Deweese; Angela Eeds; Gautam Biswas – Grantee Submission, 2025
Collaborative dialogue offers rich insights into students' learning and critical thinking. This is essential for adapting pedagogical agents to students' learning and problem-solving skills in STEM+C settings. While large language models (LLMs) facilitate dynamic pedagogical interactions, potential hallucinations can undermine confidence, trust,…
Descriptors: STEM Education, Computer Science Education, Artificial Intelligence, Natural Language Processing
Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
The process of generating challenging and appropriate distractors for multiple-choice questions is a complex and time-consuming task. Existing methods for automated generation have limitations in proposing challenging distractors, or they fail to effectively filter out incorrect choices that closely resemble the correct answer, share synonymous…
Descriptors: Multiple Choice Tests, Artificial Intelligence, Attention, Natural Language Processing
Peer reviewed: Yang Zhong; Mohamed Elaraby; Diane Litman; Ahmed Ashraf Butt; Muhsin Menekse – Grantee Submission, 2024
This paper introduces REFLECTSUMM, a novel summarization dataset specifically designed for summarizing students' reflective writing. The goal of REFLECTSUMM is to facilitate developing and evaluating novel summarization techniques tailored to real-world scenarios with little training data, with potential implications in the opinion summarization…
Descriptors: Documentation, Writing (Composition), Reflection, Metadata
Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2025
Automated multiple-choice question (MCQ) generation is valuable for scalable assessment and enhanced learning experiences. However, existing MCQ generation methods face challenges in ensuring plausible distractors and maintaining answer consistency. This paper introduces a method for MCQ generation that integrates reasoning-based explanations…
Descriptors: Automation, Computer Assisted Testing, Multiple Choice Tests, Natural Language Processing
Laura K. Allen; Sarah C. Creer; Püren Öncel – Grantee Submission, 2022
As educators turn to technology to supplement classroom instruction, the integration of natural language processing (NLP) into educational technologies is vital for increasing student success. NLP involves the use of computers to analyze and respond to human language, including students' responses to a variety of assignments and tasks. While NLP…
Descriptors: Natural Language Processing, Learning Analytics, Learning Processes, Methods
Wen-Chiang Ivan Lim; Neil T. Heffernan III; Ivan Eroshenko; Wai Khumwang; Pei-Chen Chan – Grantee Submission, 2025
Intelligent tutoring systems are increasingly used in schools, providing teachers with valuable analytics on student learning. However, many teachers lack the time to review these reports in detail due to heavy workloads, and some face challenges with data literacy. This project investigates the use of large language models (LLMs) to generate…
Descriptors: Intelligent Tutoring Systems, Natural Language Processing, Assignments, Learning Management Systems
Stefan Ruseti; Ionut Paraschiv; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Automated Essay Scoring (AES) is a well-studied problem in Natural Language Processing applied in education. Solutions vary from handcrafted linguistic features to large Transformer-based models, requiring significant effort in feature extraction and model implementation. We introduce a novel Automated Machine Learning (AutoML) pipeline…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essays
Ishrat Ahmed; Wenxing Liu; Rod D. Roscoe; Elizabeth Reilley; Danielle S. McNamara – Grantee Submission, 2025
Large language models (LLMs) are increasingly being utilized to develop tools and services in various domains, including education. However, due to the nature of the training data, these models are susceptible to inherent social or cognitive biases, which can influence their outputs. Furthermore, their handling of critical topics, such as privacy…
Descriptors: Artificial Intelligence, Natural Language Processing, Computer Mediated Communication, College Students
Olney, Andrew M. – Grantee Submission, 2022
Multi-angle question answering models have recently been proposed that promise to perform related tasks like question generation. However, performance on related tasks has not been thoroughly studied. We investigate a leading model called Macaw on the task of multiple choice question generation and evaluate its performance on three angles that…
Descriptors: Test Construction, Multiple Choice Tests, Test Items, Models
Matthew T. McCrudden; Linh Huynh; Bailing Lyu; Jonna M. Kulikowich; Danielle S. McNamara – Grantee Submission, 2024
Readers build a mental representation of text during reading. The coherence building processes readers use to construct this representation are key to comprehension. We examined the effects of self-explanation on coherence building processes as undergraduates (n = 51) read five complementary texts about natural selection and…
Descriptors: Reading Processes, Reading Comprehension, Undergraduate Students, Evolution
Linh Huynh; Danielle S. McNamara – Grantee Submission, 2025
Four versions of science and history texts were tailored to diverse hypothetical reader profiles (high and low reading skill and domain knowledge) and generated by four Large Language Models (i.e., Claude, Llama, ChatGPT, and Gemini). Natural Language Processing (NLP) techniques were applied to examine variations in Large Language Model (LLM) text…
Descriptors: Artificial Intelligence, Natural Language Processing, Textbook Evaluation, Individualized Instruction
Ying Fang; Rod D. Roscoe; Danielle S. McNamara – Grantee Submission, 2023
Artificial Intelligence (AI) based assessments are commonly used in a variety of settings including business, healthcare, policing, manufacturing, and education. In education, AI-based assessments undergird intelligent tutoring systems as well as many tools used to evaluate students and, in turn, guide learning and instruction. This chapter…
Descriptors: Artificial Intelligence, Computer Assisted Testing, Student Evaluation, Evaluation Methods
Dragos-Georgian Corlatescu; Micah Watanabe; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Modeling reading comprehension processes is a critical task for Learning Analytics, as accurate models of the reading process can be used to match students to texts, identify appropriate interventions, and predict learning outcomes. This paper introduces an improved version of the Automated Model of Comprehension, namely version 4.0. AMoC has its…
Descriptors: Computer Software, Artificial Intelligence, Learning Analytics, Natural Language Processing
Laura K. Allen; Arthur C. Graesser; Danielle S. McNamara – Grantee Submission, 2023
Assessments of natural language can provide vast amounts of information about individuals' thoughts and cognitive processes, but they often rely on time-intensive human scoring, deterring researchers from collecting these sources of data. Natural language processing (NLP) gives researchers the opportunity to implement automated textual analyses across a…
Descriptors: Psychological Studies, Natural Language Processing, Automation, Research Methodology

