Peer reviewed
ERIC Number: ED659592
Record Type: Non-Journal
Publication Date: 2023-Sep-30
Pages: N/A
Abstractor: As Provided
ISBN: N/A
ISSN: N/A
EISSN: N/A
Available Date: N/A
Let's Chat: Chatbot Nudging for Improved Course Performance
Katharine Meyer; Lindsay Page; Catherine Mata Hidalgo; Candice Fifield; Brandon Walsh; Eric Smith; Michael Evans
Society for Research on Educational Effectiveness
Background: Students face many challenges completing a postsecondary education program, including both administrative hurdles (e.g., course registration) and course management challenges (e.g., scheduling study time, finding academic support). Text-based nudging in the postsecondary context has been a particularly effective tool when targeting administrative tasks (Castleman & Page, 2015, 2016, 2017; Castleman, Page, & Schooley, 2014; Page, Castleman, & Meyer, 2019; Page, Meyer, Lee, & Gehlbach, 2022) or encouraging use of specific academic services (Pugatch & Wilson, 2018). Personalized outreach from a trusted source, such as one's school or a peer, has been more effective at moving student behavior (Avery et al., 2020; Debnam, 2017). However, in other settings, outreach around college academic performance and encouragement of specific study skills has not yielded significant effects (Oreopoulos & Petronijevic, 2019).
Research Question and Motivation: This study examines whether text-based nudging can support students' performance in an introductory college course. Course performance matters for college success: both marginal (e.g., passing the class) and inframarginal (e.g., earning a B vs. a C) differences in course performance can affect students' choice of major, progression to upper-level courses, eligibility for financial aid, and college persistence. We recognize that an intervention in a single course has limited reach with respect to ultimate college completion, and we view this work as part of a broader ecosystem of student support.
Setting & Method: We conducted a randomized controlled trial of a course chatbot designed to (1) provide timely reminders of assignments; (2) provide customized feedback on students' progress; (3) connect students to academic supports; and (4) serve as an additional channel of communication between students and their instructors. We pre-registered the study with the Registry of Efficacy and Effectiveness Studies (REES). We randomized 1,568 students enrolled in an asynchronous, virtual introductory government class to the treatment or control condition across three semesters. We targeted this course given its high historical failure/withdrawal rate and because it was a large-enrollment course in which students might struggle to connect with the instructional team. All students received standard course communications from the professor (primarily via email). Treatment students additionally received 2-3 weekly text messages from the chatbot providing information and inviting interaction. The weekly messaging schedule included a digest of upcoming assignments, customized information about whether students had overdue assignments, and encouragement from the course TA. Around each of the asynchronous course exams, students received reminders about a "quiz me" bot function for trying sample exam questions, reminders about when the exam opened, and nudges to start the exam if they had not done so by the exam closing date. Students could text the bot at any time and receive either an automatic reply drawn from the bot's knowledge base of answers or, when no clear bot reply was available, a notice that the course teaching assistant would address their question shortly (a minimal sketch of this routing logic appears below).
Population: We implemented the intervention at Georgia State University in Atlanta, GA. As highlighted in Table 1, about 43% of students were Black, 15% were Hispanic, and about a quarter were first-generation college students. The share of freshmen each term ranged from 43% to 78% of the course.
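The abstract does not describe the chatbot's implementation, so the following is only a minimal sketch of the reply-routing behavior described above (an automatic knowledge-base answer when a clear match exists, otherwise escalation to the teaching assistant); all function names and trigger phrases here are hypothetical choices of ours, not the study's actual system.

```python
# Hypothetical sketch of the reply-routing logic described in the abstract.
# The real chatbot's matching is presumably more sophisticated; this only
# illustrates the "answer from knowledge base, else escalate to TA" flow.

def route_message(text: str, knowledge_base: dict[str, str]) -> str:
    """Return an automatic answer when the knowledge base has a clear
    match; otherwise notify the student that the TA will follow up."""
    normalized = text.strip().lower()
    for trigger, answer in knowledge_base.items():
        if trigger in normalized:
            return answer  # clear bot reply available
    # No confident match: escalate to a human responder.
    return ("Thanks for your question! The course teaching assistant "
            "will get back to you shortly.")

# Hypothetical knowledge-base entries for illustration only.
kb = {
    "quiz me": "Reply QUIZ to try sample exam questions.",
    "due": "Your next assignment is due Friday at 11:59pm.",
}
print(route_message("When is the homework due?", kb))
```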
We hypothesized the intervention might be especially useful for first-generation students, who may not have mentors available to provide guidance on effective college course management (Jack, 2016; Lareau, 2003; Walton & Cohen, 2007), and for students who had struggled academically (either those with lower prior grades or those re-taking the course after an unsatisfactory grade on their first attempt). We viewed as an open question whether the chatbot might differentially benefit freshmen (who are new to college course management) or upperclassmen (particularly the type of upperclassmen enrolling in a course typically completed during freshman year).
Research Design: To estimate the treatment effect, we use a regression model of the following general form: $Y_{ir} = \alpha_r + \beta T_i + X_i \gamma + \epsilon_{ir}$, where $Y_{ir}$ represents the outcome for study participant $i$, $T_i$ is the indicator for assignment to treatment and is equal to one if participant $i$ is randomized to the academic chatbot group, $X_i$ represents a vector of individual-level baseline characteristics (included primarily to explain residual variation in outcomes and thereby improve the precision of estimation), and $\epsilon_{ir}$ is a random error term. We include implementation semester fixed effects ($\alpha_r$). For subgroup analyses, we modify this analytic model to include an interaction between treatment and student characteristics (an illustrative estimation sketch follows the abstract).
Findings: The academic chatbot significantly shifted students' final grades, increasing the likelihood that students received a B or higher in the course by five percentage points (Table 2). We find the chatbot affected grades at different points in the distribution for different groups; for example, it increased the likelihood that freshmen earned an A, while for students re-taking the course it increased the likelihood of passing. On end-of-course surveys, students reported enthusiasm for the chatbot, with 92 percent of respondents recommending its continued use in the course and its expansion to other courses. About half of treated students wrote in to the chatbot, and 22% of all treated students used the "quiz me" tool. In the main paper, we supplement our impact analysis with insights about efforts to scale the chatbot across classes, sharing data on the faculty and staff time required during the development, evaluation, and scale phases. We further leverage open-ended responses on end-of-course surveys to provide a portrait of students' experiences during the intervention and insight into the mechanisms through which the intervention affected final grades.
Conclusions and Implications: This intervention builds on an ongoing research-practice partnership with Georgia State University, and these insights have informed adaptation of the chatbot for introductory economics and introductory chemistry courses. Our analysis highlights the efficacy of a low-cost tool to supplement student learning and help students develop course management skills. This work further provides an analytic framework for studying interventions implemented in a shifting context. We offer recommendations on how to systematically document variation in sample, intervention context, and implementation to provide a comprehensive picture of whether, for whom, and under which circumstances an intervention is effective.
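The study's data and exact covariate set are not public, so the sketch below only illustrates one standard way to estimate the pre-registered model with semester fixed effects using Python's statsmodels; the file name and column names (b_or_higher, treatment, semester, gpa_baseline, first_gen) are hypothetical stand-ins, not the paper's actual variables.

```python
# Illustrative estimation of the model
#   Y_ir = alpha_r + beta * T_i + X_i * gamma + e_ir
# via statsmodels' formula API. All names below are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study_sample.csv")  # hypothetical analysis file

# Main effect: semester fixed effects via C(semester); baseline covariates
# improve precision; robust (HC2) standard errors.
model = smf.ols(
    "b_or_higher ~ treatment + C(semester) + gpa_baseline + first_gen",
    data=df,
).fit(cov_type="HC2")
print(model.summary())

# Subgroup analysis: interact treatment with a student characteristic,
# mirroring the modification described in the Research Design section.
interacted = smf.ols(
    "b_or_higher ~ treatment * first_gen + C(semester) + gpa_baseline",
    data=df,
).fit(cov_type="HC2")
print(interacted.params["treatment:first_gen"])
```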
Society for Research on Educational Effectiveness. 2040 Sheridan Road, Evanston, IL 60208. Tel: 202-495-0920; e-mail: contact@sree.org; Web site: https://www.sree.org/
Publication Type: Reports - Research
Education Level: Higher Education; Postsecondary Education
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: Society for Research on Educational Effectiveness (SREE)
Identifiers - Location: Georgia (Atlanta)
Grant or Contract Numbers: N/A
Author Affiliations: N/A