
Text as a Universal Interface for Transferable Personalization

Published: January 8, 2026 | arXiv ID: 2601.04963v1

By: Yuting Liu, Jian Guan, Jia-Nan Li, and more

Potential Business Impact:

AI learns what you like from your own words.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

We study the problem of personalization in large language models (LLMs). Prior work predominantly represents user preferences as implicit, model-specific vectors or parameters, yielding opaque "black-box" profiles that are difficult to interpret and transfer across models and tasks. In contrast, we advocate natural language as a universal, model- and task-agnostic interface for preference representation. This formulation leads to interpretable and reusable preference descriptions, while naturally supporting continual evolution as new interactions are observed. To learn such representations, we introduce a two-stage training framework that combines supervised fine-tuning on high-quality synthesized data with reinforcement learning to optimize long-term utility and cross-task transferability. Based on this framework, we develop AlignXplore+, a universal preference reasoning model that generates textual preference summaries. Experiments on nine benchmarks show that our 8B model achieves state-of-the-art performance -- outperforming substantially larger open-source models -- while exhibiting strong transferability across tasks, model families, and interaction formats.
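To make the core idea concrete, the sketch below mimics the pipeline the abstract describes: distill a user's interaction history into a short natural-language preference summary, then score that summary with a utility-style reward such as an RL stage might optimize. This is a toy illustration only; the function names, the summary template, and the reward are all hypothetical stand-ins, not the paper's actual model or training objective, which uses a trained 8B LLM.

```python
from collections import Counter

def summarize_preferences(interactions):
    """Toy stand-in for a textual preference summary: distill a user's
    (topic, liked) interaction history into a short natural-language
    description. The real AlignXplore+ summarizer is a trained LLM."""
    liked = Counter(t for t, ok in interactions if ok)
    disliked = Counter(t for t, ok in interactions if not ok)
    likes = ", ".join(t for t, _ in liked.most_common(2)) or "nothing yet"
    dislikes = ", ".join(t for t, _ in disliked.most_common(1)) or "nothing yet"
    return f"The user likes {likes} and dislikes {dislikes}."

def utility_reward(summary, held_out_topic, held_out_liked):
    """Toy long-term-utility reward: 1.0 if the summary correctly
    anticipates a held-out preference, else 0.0. An RL stage would
    optimize the summarizer against downstream rewards like this."""
    mentioned_as_like = held_out_topic in summary.split("dislikes")[0]
    return 1.0 if mentioned_as_like == held_out_liked else 0.0

history = [("sci-fi", True), ("sci-fi", True), ("poetry", False), ("hiking", True)]
summary = summarize_preferences(history)
print(summary)  # a human-readable, model-agnostic preference description
```

Because the summary is plain text, any downstream model can consume it, which is the transferability argument the abstract makes against opaque embedding-based profiles.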

Country of Origin
🇨🇳 China


Page Count
38 pages

Category
Computer Science:
Computation and Language