BiFair: A Fairness-aware Training Framework for LLM-enhanced Recommender Systems via Bi-level Optimization
By: Jiaming Zhang, Yuyuan Li, Yiqun Xu, and more
Potential Business Impact:
Makes online suggestions fairer for everyone.
Large Language Model-enhanced Recommender Systems (LLM-enhanced RSs) have emerged as a powerful approach to improving recommendation quality by leveraging LLMs to generate item representations. Despite these advancements, the integration of LLMs raises severe fairness concerns. Existing studies reveal that LLM-based RSs exhibit greater unfairness than traditional RSs, yet fairness issues in LLM-enhanced RSs remain largely unexplored. In this paper, our empirical study reveals that while LLM-enhanced RSs improve fairness across item groups, a significant fairness gap persists. Further enhancement remains challenging due to the architectural differences and varying sources of unfairness inherent in LLM-enhanced RSs. To bridge this gap, we first decompose unfairness into i) prior unfairness in LLM-generated representations and ii) training unfairness in recommendation models. Then, we propose BiFair, a bi-level optimization-based fairness-aware training framework designed to mitigate both prior and training unfairness simultaneously. BiFair optimizes two sets of learnable parameters: LLM-generated representations and a trainable projector in the recommendation model, using a two-level nested optimization process. Additionally, we introduce an adaptive inter-group balancing mechanism, leveraging multi-objective optimization principles to dynamically balance fairness across item groups. Extensive experiments on three real-world datasets demonstrate that BiFair significantly mitigates unfairness and outperforms previous state-of-the-art methods.
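To make the two-level structure described in the abstract concrete, below is a minimal sketch (not the authors' code) of such a bi-level training loop: an inner level fits a trainable projector on the recommendation loss, while an outer level adjusts a learnable copy of the LLM-generated item representations to shrink the fairness gap between item groups. All names (item_reps, projector, group_ids) and the simple softmax re-weighting used for inter-group balancing are assumptions standing in for the paper's multi-objective mechanism, whose exact form is not given here.

```python
# Hypothetical sketch of a BiFair-style bi-level loop (PyTorch).
import torch
import torch.nn as nn

torch.manual_seed(0)
n_items, llm_dim, rec_dim, n_groups = 100, 32, 16, 2

# Outer variables: learnable copy of the LLM-generated item representations.
item_reps = nn.Parameter(torch.randn(n_items, llm_dim))
# Inner variables: trainable projector inside the recommendation model.
projector = nn.Linear(llm_dim, rec_dim)

# Toy supervision: user embeddings, observed ratings, item group labels.
user_emb = torch.randn(n_items, rec_dim)
ratings = torch.rand(n_items)
group_ids = torch.randint(0, n_groups, (n_items,))

outer_opt = torch.optim.Adam([item_reps], lr=1e-2)
inner_opt = torch.optim.Adam(projector.parameters(), lr=1e-2)

def per_item_loss(reps):
    """Pointwise recommendation loss (squared error on predicted scores)."""
    scores = (projector(reps) * user_emb).sum(dim=1)
    return (scores - ratings).pow(2)

def group_losses(losses):
    """Average loss per item group; their spread is the fairness gap."""
    return torch.stack([losses[group_ids == g].mean() for g in range(n_groups)])

for step in range(200):
    # Inner level: update the projector on the recommendation objective.
    inner_opt.zero_grad()
    per_item_loss(item_reps.detach()).mean().backward()
    inner_opt.step()

    # Outer level: update the item representations on a fairness objective.
    outer_opt.zero_grad()
    g = group_losses(per_item_loss(item_reps))
    # Adaptive inter-group balancing: up-weight currently worse-off groups
    # (a softmax heuristic, not the paper's exact multi-objective scheme).
    weights = torch.softmax(g.detach(), dim=0)
    fairness_loss = (weights * g).sum() + g.max() - g.min()
    fairness_loss.backward()
    outer_opt.step()
```

In this toy setup the outer step only touches item_reps (the projector's gradients are cleared at the start of the next inner step), mirroring the nested optimization of two parameter sets; the real framework would plug in actual recommendation and fairness objectives.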
Similar Papers
Investigating and Mitigating Stereotype-aware Unfairness in LLM-based Recommendations
Information Retrieval
Fixes AI recommendations to be fair to everyone.
Federated Latent Factor Model for Bias-Aware Recommendation with Privacy-Preserving
Machine Learning (CS)
Keeps your private data safe while recommending things.
Revealing Potential Biases in LLM-Based Recommender Systems in the Cold Start Setting
Information Retrieval
Finds unfairness in computer suggestions.