Minimizing Layerwise Activation Norm Improves Generalization in Federated Learning
By: M Yashwanth, Gaurav Kumar Nayak, Harsh Rangwani, and more
Federated Learning (FL) is an emerging machine learning framework in which multiple clients, coordinated by a server, collaboratively train a global model by aggregating locally trained models without sharing any client's training data. Recent works have observed that learning in a federated manner may lead the aggregated global model to converge to a 'sharp' minimum, adversely affecting the generalization of the FL-trained model. In this work, we therefore aim to improve the generalization performance of models trained in a federated setup by introducing a 'flatness'-constrained FL optimization problem, where the flatness constraint is imposed on the top eigenvalue of the Hessian of the training loss. Since each client trains a model on its local data, we reformulate this complex problem in terms of the client loss functions and propose a new, computationally efficient regularization technique, dubbed 'MAN,' which Minimizes the Activation Norm of each layer in the client-side models. We also show theoretically that minimizing the activation norm reduces the top eigenvalue of the layer-wise Hessian of the client's loss, which in turn decreases the top eigenvalue of the overall Hessian, ensuring convergence to a flat minimum. We apply the proposed flatness-constrained optimization to existing FL techniques and obtain significant improvements, establishing a new state of the art.
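To illustrate the general idea, here is a minimal sketch of how a layer-wise activation-norm penalty could be added to a client's local objective. It assumes a simple PyTorch MLP; the model, the regularization weight lambda_man, and the choice of a batch-averaged squared L2 norm are illustrative assumptions, not the paper's exact MAN formulation.

```python
# Sketch only: layer-wise activation-norm regularization on a client's local loss.
# The architecture, `lambda_man`, and the squared-L2 norm are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    def __init__(self, in_dim=784, hidden=256, num_classes=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.fc3 = nn.Linear(hidden, num_classes)

    def forward(self, x):
        a1 = F.relu(self.fc1(x))
        a2 = F.relu(self.fc2(a1))
        logits = self.fc3(a2)
        # Return the intermediate activations so their norms can be penalized.
        return logits, [a1, a2]

def client_loss(model, x, y, lambda_man=1e-4):
    logits, activations = model(x)
    task_loss = F.cross_entropy(logits, y)
    # Layer-wise activation-norm penalty: batch-averaged squared L2 norm per layer.
    act_penalty = sum(a.pow(2).sum(dim=1).mean() for a in activations)
    return task_loss + lambda_man * act_penalty

# One local client update step (the server-side aggregation is unchanged).
model = MLP()
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = client_loss(model, x, y)
opt.zero_grad()
loss.backward()
opt.step()
```

In this sketch the regularizer only touches each client's local loss, so it can be dropped into standard FL pipelines (e.g., FedAvg-style local training followed by model aggregation) without altering the communication protocol.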
Similar Papers
Fairness Regularization in Federated Learning
Machine Learning (CS)
Makes AI learn fairly from everyone's data.
Optimization Methods and Software for Federated Learning
Machine Learning (CS)
Helps many phones learn together safely.
Hierarchical Federated Learning for Social Network with Mobility
Machine Learning (CS)
Learns from phones without seeing your private stuff.