From 1,000,000 Users to Every User: Scaling Up Personalized Preference for User-level Alignment

Published: March 19, 2025 | arXiv ID: 2503.15463v3

By: Jia-Nan Li, Jian Guan, Songhao Wu, and more

Potential Business Impact:

Teaches AI to understand what *you* want.

Business Areas:
Personalization, Commerce and Shopping

Large language models (LLMs) have traditionally been aligned through one-size-fits-all approaches that assume uniform human preferences, fundamentally overlooking the diversity in user values and needs. This paper introduces a comprehensive framework for scalable personalized alignment of LLMs. We establish a systematic preference space characterizing psychological and behavioral dimensions, alongside diverse persona representations for robust preference inference in real-world scenarios. Building upon this foundation, we introduce AlignX, a large-scale dataset of over 1.3 million personalized preference examples, and develop two complementary alignment approaches: *in-context alignment*, directly conditioning on persona representations, and *preference-bridged alignment*, modeling intermediate preference distributions. Extensive experiments demonstrate substantial improvements over existing methods, with an average 17.06% accuracy gain across four benchmarks while exhibiting a strong adaptation capability to novel preferences, robustness to limited user data, and precise preference controllability. These results validate our approach toward user-adaptive AI systems.
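The first approach, in-context alignment, can be read as conditioning the model directly on a textual persona representation at inference time. A minimal sketch of that idea follows; the function name, persona fields, and prompt layout are illustrative assumptions, not the paper's actual API.

```python
# Minimal sketch of in-context alignment as prompt conditioning:
# a persona representation is serialized and prepended to the query,
# so the LLM's response is shaped by that user's preferences.
# All names here (build_persona_prompt, the persona keys) are hypothetical.

def build_persona_prompt(persona: dict, query: str) -> str:
    """Serialize the persona and prepend it to the user query."""
    traits = "; ".join(f"{k}: {v}" for k, v in persona.items())
    return f"[User profile] {traits}\n[Query] {query}"

persona = {"tone": "concise", "expertise": "beginner"}
prompt = build_persona_prompt(persona, "Explain gradient descent.")
# `prompt` would then be sent to the LLM in place of the bare query.
```

The second approach, preference-bridged alignment, would instead infer an intermediate preference distribution from the persona before generation, rather than conditioning on the raw persona text.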

Country of Origin
🇨🇳 China

Repos / Data Links

Page Count
26 pages

Category
Computer Science:
Computation and Language