Evaluating Position Bias in Large Language Model Recommendations

Published: August 4, 2025 | arXiv ID: 2508.02020v1

By: Ethan Bito, Yongli Ren, Estrid He

Potential Business Impact:

Makes LLM-powered recommendation systems more reliable by ensuring that the order in which candidate items are listed does not change what gets recommended.

Large Language Models (LLMs) are increasingly being explored as general-purpose tools for recommendation tasks, enabling zero-shot and instruction-following capabilities without task-specific training. While the research community is enthusiastically embracing LLMs, there are important caveats to directly adapting them for recommendation tasks. In this paper, we show that LLM-based recommendation models suffer from position bias, where the order of candidate items in a prompt can disproportionately influence the recommendations produced. First, we analyse the position bias of LLM-based recommendations on real-world datasets; the results uncover systemic biases, with LLM outputs highly sensitive to input order. We then introduce a new prompting strategy, Ranking via Iterative SElection (RISE), to mitigate the position bias of LLM recommendation models, and compare it against various baselines on key benchmark datasets. Experimental results show that our method reduces sensitivity to input ordering and improves stability without requiring model fine-tuning or post-processing.
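The abstract does not spell out how RISE works, but the name suggests building a ranking by repeatedly asking the model for the single best remaining candidate rather than for a complete ordering in one prompt. The Python sketch below illustrates that idea under stated assumptions: `query_llm`, the prompt wording, and the between-round shuffle are all hypothetical stand-ins, not the authors' implementation, and the toy model's built-in bias exists only to make the demo self-contained.

```python
import random


def query_llm(prompt: str, candidates: list[str]) -> str:
    """Hypothetical stand-in for a real LLM API call.

    To keep the sketch self-contained, this toy "model" is
    deliberately position-biased: it picks the first-listed
    candidate most of the time, mimicking the sensitivity to
    input order that the paper reports."""
    return candidates[0] if random.random() < 0.7 else random.choice(candidates)


def rank_by_iterative_selection(history: str, candidates: list[str]) -> list[str]:
    """One plausible reading of RISE: instead of asking the LLM to
    rank every candidate in a single prompt, repeatedly ask for the
    single most relevant remaining item. Shuffling the pool between
    rounds (an illustrative choice, not necessarily the authors')
    keeps any item from being pinned to a fixed prompt position."""
    pool, ranking = list(candidates), []
    while pool:
        random.shuffle(pool)
        prompt = (f"User history: {history}\n"
                  f"Pick the single most relevant item from: {pool}")
        ranking.append(query_llm(prompt, pool))
        pool.remove(ranking[-1])
    return ranking


if __name__ == "__main__":
    random.seed(0)
    items = ["item_a", "item_b", "item_c", "item_d"]
    history = "watched: sci-fi thrillers"

    # One-shot ranking is order-sensitive: presenting the same
    # candidates in reverse order can flip the top recommendation.
    for order in (items, list(reversed(items))):
        prompt = f"User history: {history}\nRank these items: {order}"
        print("one-shot top pick for", order, "->", query_llm(prompt, order))

    # Iterative selection decouples items from fixed prompt slots.
    print("iterative ranking:", rank_by_iterative_selection(history, items))
```

The intuition behind a loop like this is that a one-shot ranking exposes every candidate to one fixed slot in a single prompt, whereas iterative selection re-randomises positions each round, so any per-slot preference tends to average out across the final ranking.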

Country of Origin
🇦🇺 Australia

Page Count
5 pages

Category
Computer Science:
Information Retrieval