Peer reviewed
ERIC Number: EJ1486314
Record Type: Journal
Publication Date: 2025-Nov
Pages: 24
Abstractor: As Provided
ISBN: N/A
ISSN: ISSN-0007-1013
EISSN: EISSN-1467-8535
Available Date: 2025-03-24
When and How Biases Seep In: Enhancing Debiasing Approaches for Fair Educational Predictive Analytics
British Journal of Educational Technology, v56 n6 p2478-2501 2025
The use of predictive analytics powered by machine learning (ML) to model educational data has increasingly been shown to exhibit bias against marginalized populations, prompting the need for more equitable applications of these techniques. To tackle bias that emerges in training data or models at different stages of the ML modelling pipeline, numerous debiasing approaches have been proposed. Yet, research into state-of-the-art techniques for effectively employing these approaches to enhance fairness in educational predictive scenarios remains limited. Prior studies often focused on mitigating bias from a single source at a specific stage of model construction within narrowly defined scenarios, overlooking the complexities of bias originating from multiple sources across various stages. Moreover, these approaches were often evaluated using typical threshold-dependent fairness metrics, which fail to account for real-world educational scenarios where thresholds are typically unknown before evaluation. To bridge these gaps, this study systematically examined a total of 28 representative debiasing approaches, categorized by the sources of bias and the stages they targeted, for two critical educational predictive tasks, namely forum post classification and student career prediction. Both tasks involve a two-phase modelling process in which features learned by upstream models in the first phase are fed into classical ML models for final predictions, a common yet under-explored setting for educational data modelling. The study observed that addressing local stereotypical bias, label bias, or proxy discrimination in training data, as well as imposing fairness constraints on models, can effectively enhance predictive fairness. However, their efficacy was often compromised when features from upstream models were inherently biased. Beyond these findings, this study proposes two novel strategies, namely Multi-Stage and Multi-Source debiasing, to integrate existing approaches. These strategies demonstrated substantial improvements in mitigating unfairness, underscoring the importance of unified approaches capable of addressing biases from various sources across multiple stages.
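The abstract's point about threshold-dependent metrics can be made concrete: metrics such as demographic parity require fixing a classification threshold first, whereas a rank-based measure like the AUC gap between demographic groups needs no threshold at all. The sketch below is an illustration only, not from the article; the group labels, data, and the `auc_gap` helper are assumptions made for the example.

```python
def auc(scores, labels):
    """AUC via pairwise comparison: the probability that a randomly
    chosen positive example is scored above a randomly chosen negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        return float("nan")  # AUC undefined without both classes
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def auc_gap(scores, labels, groups):
    """Absolute AUC difference between groups "A" and "B".

    0 means the model ranks students equally well in both groups;
    larger values indicate threshold-free predictive unfairness.
    """
    a = auc([s for s, g in zip(scores, groups) if g == "A"],
            [y for y, g in zip(labels, groups) if g == "A"])
    b = auc([s for s, g in zip(scores, groups) if g == "B"],
            [y for y, g in zip(labels, groups) if g == "B"])
    return abs(a - b)
```

Because the comparison uses only the ranking of scores within each group, it can be computed before any deployment threshold is chosen, which matches the evaluation setting the study argues for.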
Wiley. Available from: John Wiley & Sons, Inc. 111 River Street, Hoboken, NJ 07030. Tel: 800-835-6770; e-mail: cs-journals@wiley.com; Web site: https://www-wiley-com.bibliotheek.ehb.be/en-us
Publication Type: Journal Articles; Reports - Research
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Grant or Contract Numbers: N/A
Author Affiliations: 1Centre for Learning Analytics, Monash University, Melbourne, VIC, Australia; 2Penn Center for Learning Analytics, University of Pennsylvania, Philadelphia, Pennsylvania, USA; 3Department of Computer Science, College of Information Science and Technology, Jinan University, Guangzhou, Guangdong, China