ALIGN-FL: Architecture-independent Learning through Invariant Generative component sharing in Federated Learning
By: Mayank Gulati, Benedikt Groß, Gerhard Wunder
We present ALIGN-FL, a novel approach to distributed learning that addresses the challenge of learning from highly disjoint data distributions through the selective sharing of generative components. Instead of exchanging full model parameters, our framework enables privacy-preserving learning by transferring only generative capabilities across clients, while the server performs global training on synthetic samples. Combining two complementary privacy mechanisms, DP-SGD with adaptive clipping and Lipschitz-regularized VAE decoders, with a stateful architecture that supports heterogeneous clients, we experimentally validate our approach on the MNIST and Fashion-MNIST datasets with cross-domain outliers. Our analysis demonstrates that both privacy mechanisms effectively map sensitive outliers to typical data points while maintaining utility in the extreme non-IID scenarios typical of cross-silo collaborations.

Index Terms: Client-invariant Learning, Federated Learning (FL), Privacy-preserving Generative Models, Non-Independent and Identically Distributed (Non-IID), Heterogeneous Architectures
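To make the pipeline concrete: clients train local VAEs and share only their decoders, and the server trains a global model purely on decoder-generated samples, never on raw client data. The sketch below illustrates the server-side round in PyTorch under stated assumptions; it is not the authors' implementation. The names (LipschitzDecoder, server_round, LATENT_DIM) are hypothetical, spectral normalization stands in as one way to bound the decoder's Lipschitz constant, and the client-side DP-SGD training with adaptive clipping is omitted.

```python
# Minimal sketch of one ALIGN-FL server round (illustrative, not the paper's code).
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import spectral_norm

LATENT_DIM = 16    # assumed latent dimensionality
IMG_DIM = 28 * 28  # MNIST / Fashion-MNIST image size

class LipschitzDecoder(nn.Module):
    """VAE decoder with spectrally normalized linear layers, which bounds the
    Lipschitz constant of the generative component that clients share."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            spectral_norm(nn.Linear(LATENT_DIM, 256)), nn.ReLU(),
            spectral_norm(nn.Linear(256, IMG_DIM)), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

def server_round(client_decoders, global_model, optimizer, n_per_client=64):
    """Server step: draw synthetic samples from each shared decoder and train
    the global model on them; only generative capability crosses the wire."""
    global_model.train()
    for decoder in client_decoders:
        z = torch.randn(n_per_client, LATENT_DIM)
        with torch.no_grad():
            synthetic = decoder(z)  # synthetic stand-ins for client data
        recon = global_model(synthetic)
        loss = nn.functional.mse_loss(recon, synthetic)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Usage: decoders received from clients; a simple autoencoder stands in for
# the (unspecified) global model trained on the synthetic samples.
decoders = [LipschitzDecoder() for _ in range(3)]
global_model = nn.Sequential(nn.Linear(IMG_DIM, 64), nn.ReLU(),
                             nn.Linear(64, IMG_DIM), nn.Sigmoid())
opt = torch.optim.Adam(global_model.parameters(), lr=1e-3)
server_round(decoders, global_model, opt)
```

Because only the decoder leaves the client, the stateful encoder can differ in architecture from client to client, which is what makes the scheme architecture-independent.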