Score: 1

An Empirical Study on Preference Tuning Generalization and Diversity Under Domain Shift

Published: January 9, 2026 | arXiv ID: 2601.05882v1

By: Constantinos Karouzos, Xingwei Tan, Nikolaos Aletras

Potential Business Impact:

Keeps AI assistants helpful when applied to new tasks and domains.

Business Areas:
Semantic Search, Internet Services

Preference tuning aligns pretrained language models to human judgments of quality, helpfulness, or safety by optimizing over explicit preference signals rather than likelihood alone. Prior work has shown that preference tuning degrades performance and reduces helpfulness when evaluated outside the training domain. However, the extent to which adaptation strategies mitigate this domain shift remains unexplored. We address this challenge by conducting a comprehensive and systematic study of alignment generalization under domain shift. We compare five popular alignment objectives and several adaptation strategies from the source to the target domain, including target-domain supervised fine-tuning and pseudo-labeling, across summarization and question-answering helpfulness tasks. Our findings reveal systematic differences in generalization across alignment objectives under domain shift. We show that adaptation strategies based on pseudo-labeling can substantially reduce domain-shift degradation.
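
To make the pseudo-labeling idea concrete, here is a minimal sketch (not the paper's exact recipe): unlabeled target-domain prompts are turned into preference pairs by ranking two candidate responses with a source-trained scorer, and the resulting pairs feed a DPO-style pairwise objective, used here only as one common example of an alignment objective. The function names, the toy scorer, and the dummy log-probabilities are all illustrative assumptions.

```python
# Sketch: pseudo-labeling preference pairs on an unlabeled target domain,
# then applying a DPO-style pairwise loss. All names are illustrative.
import torch
import torch.nn.functional as F


def pseudo_label_pairs(prompts, candidates, score_fn):
    """Build (prompt, chosen, rejected) triples by ranking two candidate
    responses per prompt with a scorer trained on the source domain."""
    pairs = []
    for prompt, (resp_a, resp_b) in zip(prompts, candidates):
        if score_fn(prompt, resp_a) >= score_fn(prompt, resp_b):
            pairs.append((prompt, resp_a, resp_b))
        else:
            pairs.append((prompt, resp_b, resp_a))
    return pairs


def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO objective on (pseudo-)labeled pairs: push the policy's
    chosen-vs-rejected log-prob margin above the frozen reference margin."""
    margin = (logp_chosen - logp_rejected) - (ref_logp_chosen - ref_logp_rejected)
    return -F.logsigmoid(beta * margin).mean()


if __name__ == "__main__":
    # Toy scorer: prefers longer responses (stand-in for a real reward model).
    score_fn = lambda prompt, resp: float(len(resp))
    prompts = ["Summarize the report.", "Answer the user question."]
    candidates = [
        ("Short.", "A fuller, more detailed summary."),
        ("A thorough, helpful answer.", "No."),
    ]
    print(pseudo_label_pairs(prompts, candidates, score_fn))

    # Dummy per-pair log-probabilities standing in for policy / reference models.
    logp_c, logp_r = torch.tensor([-5.0, -6.0]), torch.tensor([-7.0, -6.5])
    ref_c, ref_r = torch.tensor([-5.5, -6.2]), torch.tensor([-6.8, -6.4])
    print(dpo_loss(logp_c, logp_r, ref_c, ref_r))
```

In practice the scorer and the candidate generations would come from source-domain models, which is what makes this an adaptation strategy rather than additional human labeling.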

Country of Origin
🇬🇧 United Kingdom

Repos / Data Links

Page Count
16 pages

Category
Computer Science:
Computation and Language