Personalized LLM Decoding via Contrasting Personal Preference
By: Hyungjune Bu, Chanjoo Jung, Minjae Kang, and more
Potential Business Impact:
Makes AI understand what you like best.
As large language models (LLMs) are progressively deployed in various real-world applications, personalization of LLMs has become increasingly important. While various approaches to LLM personalization, such as prompt-based and training-based methods, have been actively explored, the development of effective decoding-time algorithms remains largely overlooked, despite their demonstrated potential. In this paper, we propose CoPe (Contrasting Personal Preference), a novel decoding-time approach applied after performing parameter-efficient fine-tuning (PEFT) on user-specific data. Our core idea is to leverage reward-guided decoding specifically for personalization by maximizing each user's implicit reward signal. We evaluate CoPe across five open-ended personalized text generation tasks. Our empirical results demonstrate that CoPe achieves strong performance, improving personalization by an average of 10.57% in ROUGE-L, without relying on external reward models or additional training procedures.
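To make the core idea concrete, below is a minimal, hedged sketch of contrastive decoding between a user-adapted (PEFT/LoRA) model and its base model, where the token-level log-probability ratio is treated as a DPO-style implicit reward. This is not the paper's released implementation: the model name, adapter path, the `BETA` weight, and the exact scoring rule are illustrative assumptions.

```python
# Sketch: contrastive, reward-guided greedy decoding with a user-specific
# PEFT adapter. Assumes HuggingFace `transformers` and `peft`; model and
# adapter identifiers below are hypothetical placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_ID = "base-llm-7b"                 # hypothetical base model id
ADAPTER = "path/to/user_lora_adapter"   # hypothetical per-user PEFT adapter
BETA = 1.0                              # weight on the implicit-reward term (assumed)

tok = AutoTokenizer.from_pretrained(BASE_ID)
base = AutoModelForCausalLM.from_pretrained(BASE_ID, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, ADAPTER)
model.eval()

@torch.no_grad()
def contrastive_generate(prompt: str, max_new_tokens: int = 64) -> str:
    ids = tok(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens):
        # Next-token log-probs from the personalized (adapter-enabled) model.
        logp_user = torch.log_softmax(model(ids).logits[:, -1, :], dim=-1)
        # Same forward pass with the adapter disabled, i.e. the base model.
        with model.disable_adapter():
            logp_base = torch.log_softmax(model(ids).logits[:, -1, :], dim=-1)
        # Contrastive score: personalized log-prob plus the log-ratio
        # (logp_user - logp_base), interpreted as the implicit reward.
        score = logp_user + BETA * (logp_user - logp_base)
        next_id = score.argmax(dim=-1, keepdim=True)
        ids = torch.cat([ids, next_id], dim=-1)
        if next_id.item() == tok.eos_token_id:
            break
    return tok.decode(ids[0], skip_special_tokens=True)
```

In this formulation, tokens that the user-adapted model prefers more strongly than the base model receive a bonus, steering generation toward the user's preferences without any external reward model; the actual CoPe scoring rule and decoding strategy may differ in detail.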
Similar Papers
CoPL: Collaborative Preference Learning for Personalizing LLMs
Machine Learning (CS)
Teaches AI to understand what you like best.
EpiCoDe: Boosting Model Performance Beyond Training with Extrapolation and Contrastive Decoding
Computation and Language
Makes AI smarter with less training data.
CoPE: A Small Language Model for Steerable and Scalable Content Labeling
Computation and Language
Teaches computers to judge online content fairly.