Score: 2

Beyond Static LLM Policies: Imitation-Enhanced Reinforcement Learning for Recommendation

Published: October 15, 2025 | arXiv ID: 2510.13229v1

By: Yi Zhang, Lili Xie, Ruihong Qiu, and more

Potential Business Impact:

Makes personalized recommendations (e.g., movie suggestions) faster and cheaper by training a recommendation policy on LLM-generated examples, so no LLM calls are needed at serving time.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Recommender systems (RecSys) have become critical tools for enhancing user engagement by delivering personalized content across diverse digital platforms. Recent advancements in large language models (LLMs) demonstrate significant potential for improving RecSys, primarily due to their exceptional generalization capabilities and sophisticated contextual understanding, which facilitate the generation of flexible and interpretable recommendations. However, the direct deployment of LLMs as primary recommendation policies presents notable challenges, including persistent latency issues stemming from frequent API calls and inherent model limitations such as hallucinations and biases. To address these issues, this paper proposes a novel offline reinforcement learning (RL) framework that leverages imitation learning from LLM-generated trajectories. Specifically, inverse reinforcement learning is employed to extract robust reward models from LLM demonstrations. This approach negates the need for LLM fine-tuning, thereby substantially reducing computational overhead. Simultaneously, the RL policy is guided by the cumulative rewards derived from these demonstrations, effectively transferring the semantic insights captured by the LLM. Comprehensive experiments conducted on two benchmark datasets validate the effectiveness of the proposed method, demonstrating superior performance when compared against state-of-the-art RL-based and in-context learning baselines. The code can be found at https://github.com/ArronDZhang/IL-Rec.
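
The abstract describes a two-stage pipeline: first, inverse reinforcement learning (IRL) extracts a reward model from LLM-generated recommendation trajectories; second, an offline RL policy is trained against that learned reward, so the LLM is never queried at serving time. The sketch below illustrates this idea in a tabular toy setting with synthetic "LLM" demonstrations and a generic MaxEnt-style IRL update; all names, sizes, and update rules are illustrative assumptions, not the authors' IL-Rec implementation (see the linked repository for the real code).

```python
# Minimal sketch: learn a reward model from LLM demonstration trajectories (IRL),
# then train an offline RL policy against that reward. Toy tabular setup; the
# synthetic demo generator stands in for offline LLM prompting.
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ITEMS, GAMMA = 20, 10, 0.9

# --- 1. LLM demonstrations: trajectories of (state, recommended_item, next_state).
def synthetic_llm_demos(n_traj=200, horizon=5):
    demos = []
    for _ in range(n_traj):
        s = rng.integers(N_STATES)
        traj = []
        for _ in range(horizon):
            a = (s * 3 + 1) % N_ITEMS          # stand-in for the LLM's choice
            s_next = (s + a) % N_STATES
            traj.append((s, a, s_next))
            s = s_next
        demos.append(traj)
    return demos

transitions = [t for traj in synthetic_llm_demos() for t in traj]

# --- 2. IRL step: fit a tabular reward so demonstrated items score higher than
# alternatives under a softmax policy (a MaxEnt-flavoured gradient update).
reward = np.zeros((N_STATES, N_ITEMS))
for _ in range(200):
    grad = np.zeros_like(reward)
    for s, a, _ in transitions:
        probs = np.exp(reward[s] - reward[s].max())
        probs /= probs.sum()
        grad[s, a] += 1.0          # push up the demonstrated item
        grad[s] -= probs           # push down the current policy's expectation
    reward += 0.1 * grad / len(transitions)

# --- 3. Offline RL step: fitted Q-iteration on the logged transitions using the
# learned reward; no LLM calls are needed here or at recommendation time.
Q = np.zeros((N_STATES, N_ITEMS))
for _ in range(100):
    for s, a, s_next in transitions:
        target = reward[s, a] + GAMMA * Q[s_next].max()
        Q[s, a] += 0.1 * (target - Q[s, a])

policy = Q.argmax(axis=1)          # item to recommend in each state
print("recommendations per state:", policy)
```

In this toy version the learned policy simply reproduces the demonstrator's behavior; the point is the structure of the pipeline, where the LLM's semantic knowledge is distilled into a reward model once, offline, and a lightweight RL policy handles all online serving.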

Country of Origin
🇦🇺 Australia

Repos / Data Links
https://github.com/ArronDZhang/IL-Rec

Page Count
10 pages

Category
Computer Science:
Information Retrieval