LLM as Explainable Re-Ranker for Recommendation System
By: Yaqi Wang, Haojia Sun, Shuting Zhang
Potential Business Impact:
Helps online stores show you better, clearer choices.
The application of large language models (LLMs) in recommendation systems has recently gained traction. Traditional recommendation systems often lack explainability and suffer from issues such as popularity bias. Previous research has also indicated that LLMs, when used as standalone predictors, fail to achieve accuracy comparable to traditional models. To address these challenges, we propose using an LLM as an explainable re-ranker: a hybrid approach that combines traditional recommendation models with LLMs to enhance both accuracy and interpretability. We constructed a dataset to train the re-ranker LLM and evaluated the alignment between the generated dataset and human expectations. Leveraging a two-stage training process, our model significantly improved NDCG (Normalized Discounted Cumulative Gain), a key ranking metric. Moreover, the re-ranker outperformed a zero-shot baseline in both ranking accuracy and interpretability. These results highlight the potential of integrating traditional recommendation models with LLMs to address limitations in existing systems, paving the way for more explainable and fair recommendation frameworks.
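For readers unfamiliar with the evaluation metric named above, the following is a minimal, generic sketch of how NDCG is computed; it is an illustration of the standard formula, not the authors' evaluation code, and the function names and example relevance scores are hypothetical.

```python
import math

def dcg(relevances):
    """Discounted Cumulative Gain: each item's relevance score is
    discounted by log2 of its (1-indexed) rank position + 1."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(relevances, k=None):
    """NDCG@k: DCG of the ranking as given, normalized by the DCG of
    the ideal (relevance-descending) ordering of the same items."""
    k = k or len(relevances)
    ideal = sorted(relevances, reverse=True)
    ideal_dcg = dcg(ideal[:k])
    return dcg(relevances[:k]) / ideal_dcg if ideal_dcg > 0 else 0.0

# A ranking already sorted by relevance is ideal (NDCG = 1.0);
# a reversed one is penalized, which is why a re-ranker that moves
# relevant items toward the top raises the score.
print(ndcg([3, 2, 1]))  # 1.0
print(ndcg([1, 2, 3]))  # < 1.0
```

Because the log discount weights top positions most heavily, even a re-ranker that only reshuffles a short candidate list from a traditional retrieval model can move NDCG substantially.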
Similar Papers
How Reliable are LLMs for Reasoning on the Re-ranking task?
Computation and Language
Shows how computers learn to explain their choices.
Evaluating LLM-Based Mobile App Recommendations: An Empirical Study
Information Retrieval
Shows how smart computer programs pick apps.
End-to-End Personalization: Unifying Recommender Systems with Large Language Models
Information Retrieval
Suggests movies you'll love, explains why.