Bias Mitigation for AI-Feedback Loops in Recommender Systems: A Systematic Literature Review and Taxonomy

Published: August 28, 2025 | arXiv ID: 2509.00109v1

By: Theodor Stoecker, Samed Bayer, Ingo Weber

Potential Business Impact:

Addresses AI recommenders that grow unfair over time by repeatedly retraining on their own outputs.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Recommender systems continually retrain on user reactions to their own predictions, creating AI feedback loops that amplify biases and diminish fairness over time. Despite this well-known risk, most bias mitigation techniques are tested only on static splits, so their long-term fairness across multiple retraining rounds remains unclear. We therefore present a systematic literature review of bias mitigation methods that explicitly consider AI feedback loops and are validated in multi-round simulations or live A/B tests. Screening 347 papers yields 24 primary studies published between 2019 and 2025. Each study is coded on six dimensions: mitigation technique, biases addressed, dynamic testing set-up, evaluation focus, application domain, and ML task; these codes organise the studies into a reusable taxonomy. The taxonomy offers industry practitioners a quick checklist for selecting robust methods and gives researchers a clear roadmap to the field's most urgent gaps. Examples of such gaps include the shortage of shared simulators, inconsistent evaluation metrics, and the fact that most studies report either fairness or performance but not both; only six report both.
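To make the feedback-loop mechanism concrete, here is a minimal sketch (not from the paper; all names and parameters are illustrative assumptions) of a multi-round retraining loop in which a toy recommender ranks items by their observed clicks and allocates exposure proportionally. Because clicked items receive more exposure in the next round, the loop exhibits the rich-get-richer dynamic that static train/test splits cannot reveal:

```python
import random

def simulate_feedback_loop(n_items=10, n_rounds=20, impressions=500, seed=42):
    """Toy AI feedback loop: each round, 'retrain' by ranking on observed
    clicks, expose items proportionally to that ranking, and feed the
    resulting clicks back into the next round's training data.

    Returns the top item's share of all clicks after each round, a crude
    proxy for popularity-bias concentration. Purely illustrative.
    """
    rng = random.Random(seed)
    true_appeal = [rng.random() for _ in range(n_items)]  # hidden ground truth
    clicks = [1] * n_items  # the model's only "training data"
    top_share = []
    for _ in range(n_rounds):
        for _ in range(impressions):
            # Exposure is proportional to past clicks (the feedback loop).
            total = sum(clicks)
            r = rng.random() * total
            acc, item = 0.0, 0
            for i, c in enumerate(clicks):
                acc += c
                if r < acc:
                    item = i
                    break
            # The user clicks with probability equal to the item's true appeal.
            if rng.random() < true_appeal[item]:
                clicks[item] += 1
        top_share.append(max(clicks) / sum(clicks))
    return top_share

shares = simulate_feedback_loop()
```

Mitigation methods surveyed by the review would intervene somewhere in this loop, e.g. by re-weighting the exposure step or debiasing the click data before retraining, which is exactly why they must be evaluated over multiple rounds rather than on a single static split.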

Country of Origin
🇩🇪 Germany

Page Count
7 pages

Category
Computer Science:
Information Retrieval