FedPromo: Federated Lightweight Proxy Models at the Edge Bring New Domains to Foundation Models
By: Matteo Caligiuri, Francesco Barbato, Donald Shenaj, and more
Potential Business Impact:
Makes big AI models work on small devices.
Federated Learning (FL) is an established paradigm for training deep learning models on decentralized data. However, as the size of the models grows, conventional FL approaches often require significant computational resources on client devices, which may not be feasible. We introduce FedPromo, a novel framework that enables efficient adaptation of large-scale foundation models stored on a central server to new domains encountered only by remote clients. Instead of directly training the large model on client devices, FedPromo optimizes lightweight proxy models via FL, significantly reducing computational overhead while maintaining privacy. Our method follows a two-stage process: first, server-side knowledge distillation aligns the representations of a large-scale foundation model (e.g., a transformer) with those of a compact counterpart (e.g., a CNN). Then, the compact model encoder is deployed to client devices, where trainable classifiers are learned locally. These classifiers are subsequently aggregated and seamlessly transferred back to the foundation model, facilitating personalized adaptation without requiring direct access to user data. Through novel regularization strategies, our framework enables decentralized multi-domain learning, balancing performance, privacy, and resource efficiency. Extensive experiments on five image classification benchmarks demonstrate that FedPromo outperforms existing methods while assuming limited-resource clients.
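A minimal sketch of the two-stage workflow described in the abstract is shown below. This is an illustrative reconstruction, not the authors' released code: the module names, the MSE feature-alignment loss, the frozen-encoder assumption, and the FedAvg-style averaging rule are all assumptions, and the foundation and compact encoders are assumed to output features of the same dimension.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

# --- Stage 1 (server): distill foundation-model features into a compact encoder ---
def distill_step(foundation, compact, images, optimizer):
    """Align the compact encoder's representations with the frozen foundation model.
    The MSE alignment loss and shared feature dimension are illustrative assumptions."""
    with torch.no_grad():
        target_feats = foundation(images)          # e.g. transformer features
    student_feats = compact(images)                # e.g. CNN features, same dimensionality
    loss = F.mse_loss(student_feats, target_feats)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# --- Stage 2 (client): train a lightweight classifier on top of the frozen compact encoder ---
def local_update(compact_encoder, classifier, loader, epochs=1, lr=1e-3):
    """Only the classifier head is trained on-device; the encoder stays frozen."""
    compact_encoder.eval()
    classifier = copy.deepcopy(classifier)
    opt = torch.optim.SGD(classifier.parameters(), lr=lr)
    for _ in range(epochs):
        for images, labels in loader:
            with torch.no_grad():
                feats = compact_encoder(images)
            logits = classifier(feats)
            loss = F.cross_entropy(logits, labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return classifier.state_dict()

# --- Server: aggregate client classifiers (a FedAvg-style average, assumed here) ---
def aggregate(classifier_states):
    avg = copy.deepcopy(classifier_states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].float() for s in classifier_states]).mean(dim=0)
    return avg

# Because Stage 1 aligned the two encoders' feature spaces, the aggregated classifier
# can be attached on top of the foundation-model encoder without retraining it.
```

The design point this sketch tries to capture is that clients never hold or update the large model: they only optimize a small classifier over frozen compact-encoder features, and the server-side alignment from Stage 1 is what lets that classifier transfer back onto the foundation model.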
Similar Papers
Beyond Aggregation: Guiding Clients in Heterogeneous Federated Learning
Machine Learning (CS)
Directs patients to the best hospital for them.
DP2FL: Dual Prompt Personalized Federated Learning in Foundation Models
Distributed, Parallel, and Cluster Computing
Helps AI learn from new, private data faster.
Enhancing Communication Efficiency in FL with Adaptive Gradient Quantization and Communication Frequency Optimization
Distributed, Parallel, and Cluster Computing
Makes phones train AI without sharing private info.