Peer reviewed. Priti Oli; Rabin Banjade; Jeevan Chapagain; Vasile Rus – Grantee Submission, 2023
This paper systematically explores how Large Language Models (LLMs) generate explanations of code examples of the kind used in introductory programming courses. As the authors show, the nature of the code explanations generated by LLMs varies considerably with the wording of the prompt, the target code example being explained, the programming language, the…
Descriptors: Computational Linguistics, Programming, Computer Science Education, Programming Languages


