Showing all 4 results
Peer reviewed
Direct link
Maria Korochkina; Kathleen Rastle – npj Science of Learning, 2025
Breaking down complex words into smaller meaningful units known as morphemes (e.g., "unhappy = un- + happy") is vital for skilled reading, as it allows readers to rapidly compute word meanings. There is agreement that children rely on reading experience to acquire morphological knowledge in English; however, the nature of this…
Descriptors: Childrens Literature, Morphemes, Morphology (Languages), Reading Skills
Peer reviewed
Direct link
Lauren S. Baron; Anna M. Ehrhorn; Peter Shlanta; Jane Ashby; Bethany A. Bell; Suzanne M. Adlof – Reading and Writing: An Interdisciplinary Journal, 2025
Phonological processing is an important contributor to decoding and spelling difficulties, but it does not fully explain word reading outcomes for all children. As orthographic knowledge is acquired, it influences phonological processing in typical readers. In the present study, we examined whether orthography affects phonological processing…
Descriptors: Orthographic Symbols, Phonology, Language Processing, Reading Difficulties
Peer reviewed
Direct link
Linh Huynh; Danielle S. McNamara – Grantee Submission, 2025
Four versions of science and history texts, generated by four Large Language Models (i.e., Claude, Llama, ChatGPT, and Gemini), were tailored to diverse hypothetical reader profiles (high and low reading skills and domain knowledge). Natural Language Processing (NLP) techniques were applied to examine variations in Large Language Model (LLM) text…
Descriptors: Artificial Intelligence, Natural Language Processing, Textbook Evaluation, Individualized Instruction
Peer reviewed
Direct link
Linh Huynh; Danielle S. McNamara – Grantee Submission, 2025
We conducted two experiments to assess the alignment between Generative AI (GenAI) text personalization and hypothetical readers' profiles. In Experiment 1, four LLMs (i.e., Claude 3.5 Sonnet; Llama; Gemini Pro 1.5; ChatGPT 4) were prompted to tailor 10 science texts (i.e., biology, chemistry, physics) to accommodate four different profiles…
Descriptors: Natural Language Processing, Profiles, Individual Differences, Semantics