Towards Proactive Personalization through Profile Customization for Individual Users in Dialogues
By: Xiaotian Zhang, Yuan Wang, Ruizhe Chen, and more
Potential Business Impact:
Teaches computers to learn what you like over time.
The deployment of Large Language Models (LLMs) in interactive systems necessitates deep alignment with the nuanced and dynamic preferences of individual users. Current alignment techniques predominantly target universal human values or static, single-turn preferences, thereby failing to address the critical needs of long-term personalization and the initial user cold-start problem. To bridge this gap, we propose PersonalAgent, a novel user-centric lifelong agent designed to continuously infer and adapt to user preferences. PersonalAgent constructs and dynamically refines a unified user profile by decomposing dialogues into single-turn interactions and framing preference inference as a sequential decision-making task. Experiments show that PersonalAgent outperforms strong prompt-based and policy-optimization baselines, not only in idealized but also in noisy conversational contexts, while preserving cross-session preference consistency. Furthermore, human evaluation confirms that PersonalAgent captures user preferences naturally and coherently. Our findings underscore the importance of lifelong personalization for developing more inclusive and adaptive conversational agents. Our code is available here.
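To make the described architecture concrete, the sketch below illustrates one plausible shape of the lifelong preference-update loop: a dialogue is decomposed into single turns, and a unified user profile is refined sequentially after each turn. The class and function names (PersonalAgent, UserProfile, infer_preference) are assumptions chosen for illustration and are not taken from the authors' released code.

```python
# Hypothetical sketch of the lifelong preference-update loop described in the
# abstract. Names and structure are illustrative assumptions, not the authors'
# implementation.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """Unified user profile: preference statements accumulated over sessions."""
    preferences: dict = field(default_factory=dict)

    def update(self, aspect: str, value: str) -> None:
        # Later evidence overrides earlier evidence, keeping the profile
        # consistent across sessions.
        self.preferences[aspect] = value

class PersonalAgent:
    def __init__(self):
        self.profile = UserProfile()

    def infer_preference(self, user_turn: str) -> list[tuple[str, str]]:
        """Placeholder for an LLM call that extracts (aspect, value) pairs
        from a single user turn, e.g. ('answer_style', 'concise')."""
        # In practice this would prompt an LLM with the turn plus the current profile.
        return []

    def process_dialogue(self, turns: list[str]) -> UserProfile:
        # Sequential decision-making: the profile after turn t conditions
        # how turn t+1 is interpreted.
        for turn in turns:
            for aspect, value in self.infer_preference(turn):
                self.profile.update(aspect, value)
        return self.profile

# Usage: the refined profile persists across sessions for lifelong personalization.
agent = PersonalAgent()
profile = agent.process_dialogue(["I prefer short answers.", "No spicy food, please."])
```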
Similar Papers
Enabling Personalized Long-term Interactions in LLM-based Agents through Persistent Memory and User Profiles
Artificial Intelligence
AI remembers you for better conversations.
PersonaAgent: When Large Language Model Agents Meet Personalization at Test Time
Artificial Intelligence
Makes AI understand and act like you.
Adaptive Multi-Agent Response Refinement in Conversational Systems
Computation and Language
Makes chatbots smarter by checking facts and adapting to you.