A Survey on Personalized and Pluralistic Preference Alignment in Large Language Models

Published: April 9, 2025 | arXiv ID: 2504.07070v1

By: Zhouhang Xie, Junda Wu, Yiran Shen, and more

Potential Business Impact:

Helps AI systems tailor their responses to what each user prefers.

Business Areas:
Personalization, Commerce and Shopping

Personalized preference alignment for large language models (LLMs), the process of tailoring LLMs to individual users' preferences, is an emerging research direction spanning the areas of NLP and personalization. In this survey, we present an analysis of works on personalized alignment and modeling for LLMs. We introduce a taxonomy of preference alignment techniques covering training-time, inference-time, and user-modeling-based methods. We analyze and discuss the strengths and limitations of each group of techniques, and then cover evaluation, benchmarks, and open problems in the field.
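The abstract distinguishes training-time, inference-time, and user-modeling-based approaches. As a rough illustration of the inference-time flavor, the sketch below conditions a prompt on an explicit user model rather than retraining the LLM; `UserProfile` and `build_personalized_prompt` are hypothetical names for illustration only, not constructs from the paper.

```python
from dataclasses import dataclass, field


@dataclass
class UserProfile:
    """Toy user model: explicit preference statements gathered from a user."""
    user_id: str
    preferences: list[str] = field(default_factory=list)


def build_personalized_prompt(profile: UserProfile, query: str) -> str:
    """Inference-time personalization: condition generation on the user's
    stated preferences instead of updating model weights."""
    preference_block = "\n".join(f"- {p}" for p in profile.preferences)
    return (
        "You are an assistant tailoring responses to the user's preferences.\n"
        f"User preferences:\n{preference_block}\n\n"
        f"User query: {query}\n"
    )


if __name__ == "__main__":
    profile = UserProfile(
        user_id="u42",
        preferences=["concise answers", "code examples in Python"],
    )
    prompt = build_personalized_prompt(profile, "How do I parse JSON?")
    print(prompt)  # pass `prompt` to any LLM generation API of your choice
```

Training-time methods would instead fold such preference signals into fine-tuning data or reward models, a trade-off the survey's taxonomy discusses.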

Country of Origin
🇺🇸 United States

Page Count
16 pages

Category
Computer Science:
Computation and Language