Heterogeneous Federated Reinforcement Learning Using Wasserstein Barycenters
By: Luiz Pereira, M. Hadi Amini
Potential Business Impact:
Teaches AI to learn from many separate computers.
In this paper, we first propose a novel model-fusion algorithm that leverages Wasserstein barycenters to train a global Deep Neural Network (DNN) in a distributed architecture. To this end, we divide the dataset into equal parts, each fed to an "agent"; all agents hold identical deep neural networks and train only on the data fed to them (their local dataset). After a number of local training iterations, we perform an aggregation step that combines the weight parameters of all the neural networks via a Wasserstein barycenter. These steps form the proposed algorithm, referred to as FedWB. We then build on this procedure to develop an algorithm for Heterogeneous Federated Reinforcement Learning (HFRL). Our test experiment is the CartPole toy problem, where we vary the pole lengths to create heterogeneous environments. We train a Deep Q-Network (DQN) in each environment to learn to control its cart, while occasionally performing a global aggregation step to generalize the local models; the end result is a single global DQN that functions across all environments.
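The abstract does not spell out how a Wasserstein barycenter is taken over weight parameters, but one plausible reading is a per-tensor barycenter of the agents' weights, each tensor viewed as a 1-D empirical measure. A minimal sketch of that aggregation step, assuming the POT library (`ot`) and the hypothetical helper names `fedwb_aggregate` and `fedwb_round`; the paper's exact construction may differ:

```python
import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)

def fedwb_aggregate(layer_weights, num_iter=10):
    """Fuse one parameter tensor across agents via a free-support
    Wasserstein barycenter (hypothetical helper, not the paper's code)."""
    shape = layer_weights[0].shape
    # View each agent's tensor as a 1-D empirical measure: one support
    # point per scalar weight, uniform mass.
    locations = [np.asarray(w, dtype=np.float64).reshape(-1, 1)
                 for w in layer_weights]
    n = locations[0].shape[0]
    masses = [np.full(n, 1.0 / n) for _ in locations]
    # Initialize the barycenter support at the coordinate-wise mean (the
    # FedAvg solution); row i stays tied to weight position i throughout
    # the fixed-point iterations, so the result reshapes back cleanly.
    X_init = np.mean(np.stack(locations), axis=0)
    X = ot.lp.free_support_barycenter(locations, masses, X_init,
                                      numItermax=num_iter)
    return X.reshape(shape)

def fedwb_round(agent_state_dicts):
    """One global aggregation step: a barycenter per parameter tensor."""
    return {key: fedwb_aggregate([sd[key] for sd in agent_state_dicts])
            for key in agent_state_dicts[0]}
```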
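For the HFRL experiment, the abstract describes per-environment DQN training with occasional global aggregation. A rough sketch of that loop, reusing the `fedwb_round` helper above; the pole lengths, network size, and hyperparameters are illustrative guesses rather than the paper's values, and the DQN is deliberately simplified (no target network):

```python
import random
from collections import deque

import gymnasium as gym
import numpy as np
import torch
import torch.nn as nn

POLE_LENGTHS = [0.25, 0.5, 1.0]  # illustrative; the paper varies these
AGG_EVERY = 20                   # episodes between global aggregation steps

def make_env(length):
    env = gym.make("CartPole-v1")
    env.unwrapped.length = length  # Gymnasium stores the half-pole length here
    return env

class DQNAgent:
    def __init__(self):
        self.q = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
        self.opt = torch.optim.Adam(self.q.parameters(), lr=1e-3)
        self.buffer = deque(maxlen=10_000)
        self.eps, self.gamma = 0.1, 0.99

    def act(self, obs):
        if random.random() < self.eps:
            return random.randrange(2)  # epsilon-greedy exploration
        with torch.no_grad():
            return int(self.q(torch.as_tensor(obs)).argmax())

    def run_episode(self, env):
        obs, _ = env.reset()
        done = False
        while not done:
            action = self.act(obs)
            nxt, reward, term, trunc, _ = env.step(action)
            done = term or trunc
            self.buffer.append((obs, action, reward, nxt, float(done)))
            obs = nxt
            self.learn()

    def learn(self, batch=64):
        if len(self.buffer) < batch:
            return
        o, a, r, n, d = map(np.array, zip(*random.sample(self.buffer, batch)))
        o = torch.as_tensor(o, dtype=torch.float32)
        n = torch.as_tensor(n, dtype=torch.float32)
        q = self.q(o).gather(1, torch.as_tensor(a).long().unsqueeze(1)).squeeze(1)
        with torch.no_grad():  # simplified: bootstrap from the online network
            tgt = torch.as_tensor(r, dtype=torch.float32) + self.gamma * \
                  self.q(n).max(1).values * (1 - torch.as_tensor(d, dtype=torch.float32))
        loss = nn.functional.mse_loss(q, tgt)
        self.opt.zero_grad(); loss.backward(); self.opt.step()

envs = [make_env(length) for length in POLE_LENGTHS]
agents = [DQNAgent() for _ in POLE_LENGTHS]
for episode in range(200):
    for agent, env in zip(agents, envs):
        agent.run_episode(env)
    if (episode + 1) % AGG_EVERY == 0:
        # Global step: fuse the local DQNs with fedwb_round (sketched
        # above) and broadcast the fused weights back to every agent.
        dicts = [{k: v.detach().cpu().numpy() for k, v in ag.q.state_dict().items()}
                 for ag in agents]
        fused = fedwb_round(dicts)
        for ag in agents:
            ag.q.load_state_dict({k: torch.as_tensor(v, dtype=torch.float32)
                                  for k, v in fused.items()})
```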
Similar Papers
Wasserstein-Barycenter Consensus for Cooperative Multi-Agent Reinforcement Learning
Systems and Control
Teaches robots to work together better.
Collaborative Bayesian Optimization via Wasserstein Barycenters
Machine Learning (CS)
Helps computers learn secrets without sharing data.
A Novel Algorithm for Personalized Federated Learning: Knowledge Distillation with Weighted Combination Loss
Machine Learning (Stat)
Teaches computers to learn from private data better.