FOVA: Offline Federated Reinforcement Learning with Mixed-Quality Data
By: Nan Qiao, Sheng Yue, Ju Ren, and more
Potential Business Impact:
Helps AI trained across many clients learn well from mixed-quality data.
Offline Federated Reinforcement Learning (FRL), a marriage of federated learning and offline reinforcement learning, has attracted increasing interest recently. Despite some recent advances, we find that the performance of most existing offline FRL methods drops dramatically when provided with mixed-quality data, that is, when the logged behaviors (offline data) are collected by policies of varying quality across clients. To overcome this limitation, this paper introduces a new vote-based offline FRL framework, named FOVA. It exploits a vote mechanism to identify high-return actions during local policy evaluation, alleviating the negative effect of low-quality behaviors from diverse local learning policies. In addition, building on advantage-weighted regression (AWR), we construct consistent local and global training objectives, significantly enhancing the efficiency and stability of FOVA. Furthermore, we conduct an extensive theoretical analysis and rigorously show that the policy learned by FOVA enjoys strict policy improvement over the behavioral policy. Extensive experiments on widely used benchmarks corroborate the significant performance gains of FOVA over existing baselines.
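For context on the building block the abstract mentions: FOVA's local and global objectives build on advantage-weighted regression (AWR). The display below is the generic AWR policy-extraction objective (Peng et al., 2019), shown only as background; it is not FOVA's exact federated objective, which the paper derives together with the vote mechanism.

\pi_{k+1} = \arg\max_{\pi} \; \mathbb{E}_{(s,a)\sim\mathcal{D}} \Big[ \log \pi(a \mid s) \, \exp\!\big( A^{\pi_k}(s,a) / \beta \big) \Big]

Here \mathcal{D} is the offline dataset, A^{\pi_k}(s,a) is the advantage estimate under the current policy, and \beta > 0 is a temperature that controls how strongly high-advantage (high-return) actions are up-weighted in the regression.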
Similar Papers
Offline Meta-Reinforcement Learning with Flow-Based Task Inference and Adaptive Correction of Feature Overgeneralization
Machine Learning (CS)
Teaches robots to learn new tasks faster.
Behavior-Adaptive Q-Learning: A Unifying Framework for Offline-to-Online RL
Machine Learning (CS)
Helps robots learn safely from past mistakes.
Federated Reinforcement Learning for Runtime Optimization of AI Applications in Smart Eyewears
Artificial Intelligence
Smart glasses learn faster and work better together.