Showing all 6 results
Peer reviewed
Arun-Balajiee Lekshmi-Narayanan; Priti Oli; Jeevan Chapagain; Mohammad Hassany; Rabin Banjade; Vasile Rus – Grantee Submission, 2024
Worked examples, which present explained code for solving typical programming problems, are among the most popular types of learning content in programming classes. Most approaches and tools for presenting these examples to students are based on line-by-line explanations of the example code. However, instructors rarely have time to provide…
Descriptors: Coding, Computer Science Education, Computational Linguistics, Artificial Intelligence
Reese Butterfuss; Kathryn S. McCarthy; Ellen Orcutt; Panayiota Kendeou; Danielle S. McNamara – Grantee Submission, 2023
Readers often struggle to identify the main ideas in expository texts. Existing research and instruction provide some guidance on how to encourage readers to identify main ideas. However, there is substantial variability in how main ideas are operationalized and how readers are prompted to identify main ideas. This variability hinders…
Descriptors: Reading Processes, Reading Comprehension, Reading Instruction, Best Practices
Peer reviewed
Kole A. Norberg; Husni Almoubayyed; Logan De Ley; April Murphy; Kyle Weldon; Steve Ritter – Grantee Submission, 2024
Large language models (LLMs) offer an opportunity to make large-scale changes to educational content that would otherwise be too costly to implement. The work here highlights how LLMs (in particular GPT-4) can be prompted to revise educational math content for large-scale deployment in real-world learning environments. We tested the ability…
Descriptors: Artificial Intelligence, Computer Software, Computational Linguistics, Educational Change
Peer reviewed
Balyan, Renu; McCarthy, Kathryn S.; McNamara, Danielle S. – Grantee Submission, 2018
While hierarchical machine learning approaches have been used to classify texts into different content areas, this approach has, to our knowledge, not been used in the automated assessment of text difficulty. This study compared the accuracy of four classification machine learning approaches (flat, one-vs-one, one-vs-all, and hierarchical) using…
Descriptors: Artificial Intelligence, Classification, Comparative Analysis, Prediction
Wang, Zuowei; O'Reilly, Tenaha; Sabatini, John; McCarthy, Kathryn S.; McNamara, Danielle S. – Grantee Submission, 2021
We compared high school students' performance on a traditional comprehension assessment, which required them to identify key information and draw inferences from single texts, and a scenario-based assessment (SBA), which required them to integrate, evaluate, and apply information across multiple sources. Both assessments focused on a non-academic topic…
Descriptors: Comparative Analysis, High School Students, Inferences, Reading Tests
Neuman, Susan B.; Wong, Kevin M.; Kaefer, Tanya – Grantee Submission, 2017
The purpose of this study was to investigate the influence of digital and non-digital storybooks on low-income preschoolers' oral language comprehension. Employing a within-subject design with 38 four-year-olds from a Head Start program, we compared the effect of medium on preschoolers' learning of target words and comprehension of stories. Four digital…
Descriptors: Oral Language, Story Reading, Low Income Groups, Disadvantaged Youth