Evaluating Position Bias in Large Language Model Recommendations
By: Ethan Bito, Yongli Ren, Estrid He
Potential Business Impact:
Makes LLM-generated recommendations robust to the order in which candidate items appear in the prompt.
Large Language Models (LLMs) are increasingly being explored as general-purpose tools for recommendation tasks, enabling zero-shot and instruction-following capabilities without the need for task-specific training. While the research community is enthusiastically embracing LLMs, there are important caveats to adapting them directly for recommendation tasks. In this paper, we show that LLM-based recommendation models suffer from position bias: the order of candidate items in a prompt can disproportionately influence the recommendations an LLM produces. First, we analyse the position bias of LLM-based recommendations on real-world datasets; the results uncover systematic biases and high sensitivity to input order. We then introduce Ranking via Iterative SElection (RISE), a new prompting strategy that mitigates the position bias of LLM recommendation models. We compare our proposed method against various baselines on key benchmark datasets. Experimental results show that our method reduces sensitivity to input ordering and improves stability without requiring model fine-tuning or post-processing.
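The abstract names RISE but does not spell out the procedure here, so the following is only a minimal sketch of one plausible iterative-selection loop: the model is asked for a single best item per round, the pick is removed, and the remaining pool is reshuffled so no candidate benefits from a fixed prompt position. The function names (`call_llm`, `rise_rank`) and the prompt wording are illustrative assumptions, not the authors' implementation.

```python
import random


def call_llm(prompt: str) -> str:
    """Placeholder for a chat-completion call to any LLM provider."""
    raise NotImplementedError


def rise_rank(user_profile: str, candidates: list[str]) -> list[str]:
    """Rank candidates one selection at a time rather than in a single shot.

    Each round asks the model only for the single most relevant remaining
    item; the pool is reshuffled between rounds so no candidate benefits
    from a fixed position in the prompt.
    """
    remaining = list(candidates)
    ranking: list[str] = []
    while remaining:
        random.shuffle(remaining)  # decouple each pick from input order
        numbered = "\n".join(f"{i + 1}. {item}" for i, item in enumerate(remaining))
        prompt = (
            f"User profile:\n{user_profile}\n\n"
            f"Candidate items:\n{numbered}\n\n"
            "Answer with the number of the single most relevant item."
        )
        reply = call_llm(prompt)
        # A real implementation needs robust parsing; this assumes a bare number.
        idx = int(reply.strip().split()[0].rstrip(".")) - 1
        ranking.append(remaining.pop(max(0, min(idx, len(remaining) - 1))))
    return ranking
```

Order sensitivity itself can be quantified by ranking the same candidate pool under several random shuffles and measuring rank agreement across runs (e.g., Kendall's tau); a method that is robust to input order should score close to 1.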
Similar Papers
Revealing Potential Biases in LLM-Based Recommender Systems in the Cold Start Setting
Information Retrieval
Identifies potential biases in LLM-based recommender systems under cold-start conditions.
Evaluating LLM-Based Mobile App Recommendations: An Empirical Study
Information Retrieval
Empirically evaluates how LLMs recommend mobile apps.