Score: 1

Learn to Preserve Personality: Federated Foundation Models in Recommendations

Published: June 13, 2025 | arXiv ID: 2506.11563v1

By: Zhiwei Li, Guodong Long, Chunxu Zhang, and more

Potential Business Impact:

Helps AI learn about you without sharing your secrets.

Business Areas:
Personalization, Commerce and Shopping

A core learning challenge for existing Foundation Models (FMs) is striking a balance between generalization and personalization, a dilemma highlighted by various parameter-efficient adaptation techniques. Federated foundation models (FFMs) provide a structural means to decouple shared knowledge from individual-specific adaptations via decentralized processes. Recommendation systems offer an ideal testbed for FFMs, given their reliance on rich implicit feedback reflecting unique user characteristics. This position paper discusses a novel learning paradigm in which FFMs not only harness their generalization capabilities but are specifically designed to preserve the integrity of user personality, illustrated thoroughly within recommendation contexts. We envision future personal agents, powered by personalized adaptive FMs, guiding user decisions on content. Such an architecture promises a user-centric, decentralized system where individuals maintain control over their personalized agents.
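To make the decoupling idea concrete, here is a minimal sketch (not taken from the paper) of how a shared backbone could be aggregated across clients FedAvg-style while each user's lightweight personalization adapter stays on-device. All module names, dimensions, and hyperparameters (SharedBackbone, PersonalAdapter, local_update, fedavg) are illustrative assumptions, written in a PyTorch style.

# Sketch: federated shared backbone + local personalization adapters.
# Only the backbone's parameters cross the network; adapters never leave the client.
import copy
import torch
import torch.nn as nn

class SharedBackbone(nn.Module):
    """Stands in for the shared, federated foundation-model layers."""
    def __init__(self, dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return self.encoder(x)

class PersonalAdapter(nn.Module):
    """Lightweight, user-specific adaptation kept local to the client."""
    def __init__(self, dim=32):
        super().__init__()
        self.down = nn.Linear(dim, 8)
        self.up = nn.Linear(8, dim)
        self.head = nn.Linear(dim, 1)  # e.g. a click/rating score

    def forward(self, h):
        return self.head(h + self.up(torch.relu(self.down(h))))

def local_update(shared, adapter, data, steps=5, lr=1e-2):
    """One client round: train a local copy of the shared model plus the adapter,
    then return only the shared parameters for aggregation."""
    shared = copy.deepcopy(shared)
    opt = torch.optim.SGD(list(shared.parameters()) + list(adapter.parameters()), lr=lr)
    x, y = data
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(adapter(shared(x)), y)
        loss.backward()
        opt.step()
    return shared.state_dict()

def fedavg(state_dicts):
    """Server-side averaging of the shared parameters only."""
    avg = copy.deepcopy(state_dicts[0])
    for k in avg:
        avg[k] = torch.stack([sd[k] for sd in state_dicts]).mean(dim=0)
    return avg

if __name__ == "__main__":
    torch.manual_seed(0)
    global_backbone = SharedBackbone()
    # Each client holds its own adapter and (synthetic) interaction data.
    clients = [(PersonalAdapter(), (torch.randn(16, 32), torch.randn(16, 1))) for _ in range(3)]
    for rnd in range(2):
        updates = [local_update(global_backbone, adapter, data) for adapter, data in clients]
        global_backbone.load_state_dict(fedavg(updates))  # adapters stay on-device

The design choice this illustrates is that generalization lives in the aggregated backbone, while the "personality" of each user is encoded in an adapter that is trained and stored only locally, which is one plausible reading of the paradigm the abstract describes.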

Country of Origin
🇦🇺 🇭🇰 Australia, Hong Kong

Page Count
14 pages

Category
Computer Science:
Machine Learning (CS)