Bidirectional Knowledge Distillation for Enhancing Sequential Recommendation with Large Language Models
By: Jiongran Wu, Jiahao Liu, Dongsheng Li, and more
Potential Business Impact:
Improves recommendation quality by letting two types of AI models teach each other.
Large language models (LLMs) have demonstrated exceptional performance in understanding and generating semantic patterns, making them promising candidates for sequential recommendation tasks. However, when combined with conventional recommendation models (CRMs), LLMs often face challenges related to high inference costs and static knowledge transfer methods. In this paper, we propose a novel mutual distillation framework, LLMD4Rec, that fosters dynamic and bidirectional knowledge exchange between LLM-centric and CRM-based recommendation systems. Unlike traditional unidirectional distillation methods, LLMD4Rec enables iterative optimization by alternately refining both models, enhancing the semantic understanding of CRMs and enriching LLMs with collaborative signals from user-item interactions. By leveraging sample-wise adaptive weighting and aligning output distributions, our approach eliminates the need for additional parameters while ensuring effective knowledge transfer. Extensive experiments on real-world datasets demonstrate that LLMD4Rec significantly improves recommendation accuracy across multiple benchmarks without increasing inference costs. This method provides a scalable and efficient solution for combining the strengths of both LLMs and CRMs in sequential recommendation systems.
Similar Papers
Lost in Sequence: Do Large Language Models Understand Sequential Recommendation?
Information Retrieval
Helps computers guess what you'll like next.
Collaboration of Large Language Models and Small Recommendation Models for Device-Cloud Recommendation
Information Retrieval
Suggests movies you'll love, instantly.
Enhancing LLM-based Recommendation through Semantic-Aligned Collaborative Knowledge
Information Retrieval
Helps computers recommend movies better.