PersonaDual: Balancing Personalization and Objectivity via Adaptive Reasoning
By: Xiaoyou Liu, Xinyi Mou, Shengbin Yue, et al.
As users increasingly expect LLMs to align with their preferences, personalized information becomes valuable. However, personalization is a double-edged sword: it can improve interactions but may compromise objectivity and factual correctness, especially when the personal context is misaligned with the question. To alleviate this problem, we propose PersonaDual, a framework that supports both general-purpose objective reasoning and personalized reasoning in a single model and adaptively switches between the two modes based on context. PersonaDual is first trained with SFT to learn the two reasoning patterns, and then further optimized via reinforcement learning with our proposed DualGRPO to improve mode selection. Experiments on objective and personalized benchmarks show that PersonaDual preserves the benefits of personalization while reducing interference, achieving near interference-free performance and better exploiting helpful personalized signals to improve objective problem-solving.
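To make the adaptive switching concrete, the sketch below illustrates the kind of context-dependent mode selection the abstract describes: use personalized reasoning only when the persona context is relevant to the question, and fall back to objective reasoning otherwise. The function name, the lexical-overlap heuristic, and the threshold are all hypothetical illustrations, not the paper's learned mechanism (which is trained via SFT and DualGRPO).

```python
# Hypothetical sketch of adaptive mode selection. PersonaDual learns this
# behavior via SFT + RL; here a simple lexical-overlap heuristic stands in
# for the learned policy, purely for illustration.

def select_mode(question: str, persona: str, threshold: float = 0.2) -> str:
    """Return 'personalized' when enough persona terms appear in the
    question, else 'objective' (interference-free) reasoning."""
    q_terms = set(question.lower().split())
    p_terms = set(persona.lower().split())
    if not p_terms:
        return "objective"  # no persona context: nothing to personalize on
    overlap = len(q_terms & p_terms) / len(p_terms)
    return "personalized" if overlap >= threshold else "objective"
```

A misaligned persona (e.g., cooking preferences attached to a physics question) would route to the objective mode, which is the interference case the paper aims to avoid.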