
Personalized LLM Decoding via Contrasting Personal Preference

Published: June 13, 2025 | arXiv ID: 2506.12109v1

By: Hyungjune Bu, Chanjoo Jung, Minjae Kang, and more

Potential Business Impact:

Lets AI systems tailor their responses to an individual user's preferences at generation time, without extra training or external reward models.

Business Areas:
Personalization; Commerce and Shopping

As large language models (LLMs) are progressively deployed in various real-world applications, personalization of LLMs has become increasingly important. While various approaches to LLM personalization, such as prompt-based and training-based methods, have been actively explored, the development of effective decoding-time algorithms remains largely overlooked, despite their demonstrated potential. In this paper, we propose CoPe (Contrasting Personal Preference), a novel decoding-time approach applied after performing parameter-efficient fine-tuning (PEFT) on user-specific data. Our core idea is to leverage reward-guided decoding specifically for personalization by maximizing each user's implicit reward signal. We evaluate CoPe across five open-ended personalized text generation tasks. Our empirical results demonstrate that CoPe achieves strong performance, improving personalization by an average of 10.57% in ROUGE-L, without relying on external reward models or additional training procedures.
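The abstract names the idea but not the mechanics. As a rough illustration only, below is a minimal sketch of what reward-guided contrastive decoding in this spirit could look like, assuming the implicit reward is the log-probability gap between the PEFT-tuned user model and its frozen base model; the function name `contrastive_personal_step` and the scaling parameter `alpha` are illustrative assumptions, not the paper's notation.

```python
# Hedged sketch of decoding-time "contrasting personal preference".
# Assumption (not stated in the abstract above): the implicit reward for a
# token is how much more the PEFT-personalized model likes it than the
# frozen base model does, and `alpha` scales that contrast.
import torch
import torch.nn.functional as F

@torch.no_grad()
def contrastive_personal_step(personal_logits: torch.Tensor,
                              base_logits: torch.Tensor,
                              alpha: float = 1.0) -> torch.Tensor:
    """Greedily pick the next token, boosting tokens the user-tuned
    model prefers relative to the generic base model."""
    personal_logp = F.log_softmax(personal_logits, dim=-1)
    base_logp = F.log_softmax(base_logits, dim=-1)
    # Implicit per-token reward: the personalized-vs-base log-prob gap.
    reward = personal_logp - base_logp
    scores = personal_logp + alpha * reward
    return scores.argmax(dim=-1)

# Toy usage: random logits stand in for two model forward passes.
vocab_size = 32_000
personal = torch.randn(1, vocab_size)
base = torch.randn(1, vocab_size)
next_token = contrastive_personal_step(personal, base, alpha=0.5)
print(next_token)
```

In practice the two logit tensors would likely come from the same backbone with the user's PEFT adapter enabled and disabled, so only one copy of the base weights needs to be kept in memory.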

Country of Origin
🇰🇷 Korea, Republic of

Repos / Data Links

Page Count
18 pages

Category
Computer Science:
Computation and Language