ERIC Number: ED673018
Record Type: Non-Journal
Publication Date: 2025-Apr-29
Pages: 22
Abstractor: As Provided
ISBN: N/A
ISSN: N/A
EISSN: N/A
Available Date: N/A
GenAI-Powered Text Personalization: Natural Language Processing Validation of Adaptation Capabilities
Linh Huynh1; Danielle S. McNamara1
Grantee Submission
We conducted two experiments to assess the alignment between Generative AI (GenAI) text personalization and hypothetical readers' profiles. In Experiment 1, four LLMs (Claude 3.5 Sonnet, Llama, Gemini Pro 1.5, and ChatGPT 4) were prompted to tailor 10 science texts (spanning biology, chemistry, and physics) to accommodate four different profiles varying in knowledge, reading skills, and learning goals. Natural Language Processing (NLP) was leveraged to evaluate the GenAI-adapted texts using an array of linguistic and semantic features empirically associated with text readability. NLP analyses revealed variations in the degree to which the LLMs successfully adjusted linguistic features to suit reader profiles. Most notably, NLP highlighted inconsistent alignment between potential reader abilities and text complexity. The results pointed toward the need to augment the AI prompts using personification, chain-of-thought, and documents regarding text comprehension, text readability, and individual differences (i.e., leveraging RAG). The resulting text modifications in Experiment 2 were better aligned with readers' profiles. Augmented prompts resulted in LLM modifications with more appropriate cohesion features tailored to high- and low-knowledge readers for optimal comprehension. This study demonstrates how LLMs can be prompted to modify text and uniquely demonstrates the application of NLP to evaluate theory-driven content personalization using GenAI. NLP offers an efficient, real-time solution to validate personalized content across multiple domains and contexts. [Note: This content is a pre-print version of the article.]
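The abstract does not enumerate the specific linguistic features used to evaluate the adapted texts, so the following is only an illustrative sketch of the general approach: computing surface readability indicators (here, average sentence length, syllables per word, and the standard Flesch Reading Ease formula) from raw text with plain Python. The function name and the syllable heuristic are assumptions for illustration, not the authors' actual NLP pipeline.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count contiguous vowel groups; at least one per word.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def readability_features(text: str) -> dict:
    # Naive sentence and word segmentation with regexes.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    n_sent = max(1, len(sentences))
    n_words = max(1, len(words))
    n_syll = sum(count_syllables(w) for w in words)
    asl = n_words / n_sent   # average sentence length (words per sentence)
    asw = n_syll / n_words   # average syllables per word
    # Flesch Reading Ease: higher scores indicate easier-to-read text.
    flesch = 206.835 - 1.015 * asl - 84.6 * asw
    return {
        "words": n_words,
        "sentences": n_sent,
        "avg_sentence_length": asl,
        "flesch_reading_ease": flesch,
    }
```

Comparing such feature values between an original text and its GenAI-adapted version (e.g., for a low-knowledge vs. high-knowledge profile) gives a quick, automated check of whether the adaptation moved complexity in the intended direction.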
Descriptors: Natural Language Processing, Profiles, Individual Differences, Semantics, Artificial Intelligence, Readability, Reading Skills, Science Instruction, Computer Software, Textbooks, Correlation, Reading Comprehension, Cues, Connected Discourse, Computational Linguistics, Instructional Material Evaluation
Publication Type: Reports - Research
Education Level: N/A
Audience: N/A
Language: English
Authoring Institution: N/A
IES Funded: Yes
Grant or Contract Numbers: R305T240035
Department of Education Funded: Yes
Author Affiliations: 1Learning Engineering Institute, Arizona State University, Tempe, AZ, USA