Learning to Rank with Top-$K$ Fairness
By: Boyang Zhang, Quanqi Hu, Mingxuan Sun, and more
Potential Business Impact:
Makes search results fairer for everyone.
Fairness in ranking models is crucial, as disparities in exposure can disproportionately affect protected groups. Most fairness-aware ranking systems focus on ensuring comparable average exposure for groups across the entire ranked list, which may not fully address real-world concerns. For example, when a ranking model is used to allocate resources among candidates or disaster hotspots, decision-makers often consider only the top-$K$ ranked items, while the ranking beyond the top-$K$ becomes less relevant. In this paper, we propose a list-wise learning-to-rank framework that addresses inequalities in top-$K$ rankings at training time. Specifically, we propose a top-$K$ exposure disparity measure that extends the classic exposure disparity metric over a full ranked list. We then learn a ranker to balance relevance and fairness in top-$K$ rankings. Since direct top-$K$ selection is non-differentiable and computationally expensive for a large number of items, we transform the selection process into a differentiable objective function and develop efficient stochastic optimization algorithms to achieve both high accuracy and sufficient fairness. Extensive experiments demonstrate that our method outperforms existing methods.
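To make the idea concrete, the hard top-$K$ indicator can be smoothed so that a group-level disparity becomes differentiable in the scores. The sketch below is a minimal illustration assuming a sigmoid relaxation centered at the $K$-th largest score; the function names `soft_topk_weights` and `topk_exposure_disparity` are hypothetical and do not correspond to the paper's actual objective or algorithm:

```python
import numpy as np

def soft_topk_weights(scores, k, tau=0.1):
    """Smooth relaxation of top-k membership.

    Replaces the hard indicator 1[rank(i) <= k] with a sigmoid
    centered at the k-th largest score; tau controls smoothness.
    (Illustrative only -- not the paper's formulation.)
    """
    threshold = np.sort(scores)[::-1][k - 1]
    return 1.0 / (1.0 + np.exp(-(scores - threshold) / tau))

def topk_exposure_disparity(scores, groups, k, tau=0.1):
    """Absolute gap in average soft top-k inclusion between two groups."""
    w = soft_topk_weights(scores, k, tau)
    return abs(w[groups == 0].mean() - w[groups == 1].mean())
```

As `tau` shrinks, the weights approach the hard 0/1 top-$K$ indicator (the item exactly at the threshold always receives weight 0.5), so a small `tau` trades gradient signal for fidelity to the true top-$K$ selection; a disparity term like this could then be added to a relevance loss and minimized with stochastic gradients.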
Similar Papers
Improved Rank Aggregation under Fairness Constraint
Data Structures and Algorithms
Makes sure everyone gets a fair chance in rankings.
Quantifying Query Fairness Under Unawareness
Information Retrieval
Makes search results fair for everyone.
Finding a Fair Scoring Function for Top-$k$ Selection: From Hardness to Practice
Databases
Makes computer choices fair for everyone.