Ishrat Ahmed; Wenxing Liu; Rod D. Roscoe; Elizabeth Reilley; Danielle S. McNamara – Grantee Submission, 2025
Large language models (LLMs) are increasingly being utilized to develop tools and services in various domains, including education. However, due to the nature of the training data, these models are susceptible to inherent social or cognitive biases, which can influence their outputs. Furthermore, their handling of critical topics, such as privacy…
Descriptors: Artificial Intelligence, Natural Language Processing, Computer Mediated Communication, College Students

Peer reviewed