Text as a Universal Interface for Transferable Personalization
By: Yuting Liu, Jian Guan, Jia-Nan Li, and more
Potential Business Impact:
AI understands what you like from your own words.
We study the problem of personalization in large language models (LLMs). Prior work predominantly represents user preferences as implicit, model-specific vectors or parameters, yielding opaque "black-box" profiles that are difficult to interpret and transfer across models and tasks. In contrast, we advocate natural language as a universal, model- and task-agnostic interface for preference representation. This formulation yields interpretable and reusable preference descriptions while naturally supporting continual evolution as new interactions are observed. To learn such representations, we introduce a two-stage training framework that combines supervised fine-tuning on high-quality synthesized data with reinforcement learning to optimize long-term utility and cross-task transferability. Based on this framework, we develop AlignXplore+, a universal preference reasoning model that generates textual preference summaries. Experiments on nine benchmarks show that our 8B model achieves state-of-the-art performance, outperforming substantially larger open-source models, while exhibiting strong transferability across tasks, model families, and interaction formats.
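To make the "text as a universal interface" idea concrete, here is a minimal sketch of how a textual preference profile could be produced, reused across downstream models, and updated as new interactions arrive. The function names, prompts, and the `LLM` callable type are illustrative assumptions for this sketch, not the paper's actual AlignXplore+ implementation or training pipeline.

```python
# Sketch only: the prompts and helpers below are hypothetical, not the
# paper's method. Any prompt->str callable (e.g., an API-backed model)
# can play the role of `summarizer` or `responder`.

from typing import Callable, List

LLM = Callable[[str], str]  # assumed interface: prompt in, text out

def summarize_preferences(history: List[str], summarizer: LLM) -> str:
    """Distill raw user interactions into a natural-language profile."""
    joined = "\n".join(f"- {turn}" for turn in history)
    prompt = (
        "Summarize this user's stable preferences as concise natural "
        f"language:\n{joined}"
    )
    return summarizer(prompt)  # interpretable, reusable text profile

def personalized_answer(profile: str, query: str, responder: LLM) -> str:
    """Any downstream model can consume the same textual profile."""
    return responder(f"User preferences: {profile}\n\nQuery: {query}")

def update_profile(profile: str, new_turns: List[str],
                   summarizer: LLM) -> str:
    """Continual evolution: fold new interactions into the profile."""
    return summarize_preferences([profile] + new_turns, summarizer)
```

Because the profile is plain text rather than model-specific vectors, `responder` need not be the same model (or even the same model family) as `summarizer`, which is what makes this representation transferable in the sense the abstract describes.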
Similar Papers
From 1,000,000 Users to Every User: Scaling Up Personalized Preference for User-level Alignment
Computation and Language
Teaches AI to understand what *you* want.
A Survey on Personalized Alignment -- The Missing Piece for Large Language Models in Real-World Applications
Computation and Language
Teaches AI to be helpful and kind, your way.
Towards Proactive Personalization through Profile Customization for Individual Users in Dialogues
Computation and Language
Teaches computers to learn what you like over time.