Decentralized Parameter-Free Online Learning
By: Tomas Ortega, Hamid Jafarkhani
Potential Business Impact:
Computers learn together without needing hand-tuned settings.
We propose the first parameter-free decentralized online learning algorithms with network regret guarantees, achieving sublinear regret without requiring hyperparameter tuning. This family of algorithms connects multi-agent coin-betting and decentralized online learning via gossip steps. To enable our decentralized analysis, we introduce a novel "betting function" formulation for coin-betting that simplifies the multi-agent regret analysis. Our analysis yields sublinear network regret bounds, which we validate through experiments on synthetic and real datasets. These algorithms are applicable to distributed sensing, decentralized optimization, and collaborative ML applications.
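To make the connection between coin-betting and gossip steps concrete, here is a minimal, hypothetical sketch: each agent runs a scalar Krichevsky-Trofimov (KT) coin-betting learner, where "coins" are negative subgradients, and after every round the agents mix their iterates with a doubly stochastic gossip matrix. All names, the KT betting rule, and the ordering of the gossip step are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def gossip_coin_betting(grad_fn, P, T, w0=1.0):
    """Illustrative sketch of multi-agent KT coin-betting with gossip mixing.

    grad_fn(x) returns per-agent subgradients assumed to lie in [-1, 1];
    P is a doubly stochastic gossip (mixing) matrix over the network.
    This is an assumption-laden sketch, not the paper's exact method.
    """
    n = P.shape[0]
    wealth = np.full(n, w0)   # each agent starts with initial wealth w0
    coin_sum = np.zeros(n)    # running sum of observed coins per agent
    x = np.zeros(n)           # current bets (the online iterates)
    avg = np.zeros(n)         # running average of the mixed iterates
    for t in range(1, T + 1):
        c = -grad_fn(x)       # coin outcome: negative subgradient at the bet
        wealth += c * x       # settle the previous bet (stake was x)
        coin_sum += c
        x = (coin_sum / t) * wealth  # KT betting fraction times current wealth
        x = P @ x             # gossip step: average iterates with neighbors
        avg += (x - avg) / t  # running mean of iterates (converges for convex losses)
    return avg
```

As a usage example, three agents on a fully connected gossip graph minimizing the shared loss |x - 1| (subgradients in [-1, 1]) drive their averaged iterates toward the minimizer at 1, with no learning rate ever specified, which is the "parameter-free" property the abstract highlights.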
Similar Papers
Provably Near-Optimal Distributionally Robust Reinforcement Learning in Online Settings
Machine Learning (CS)
Teaches robots to work safely in new places.
Regret Lower Bounds for Decentralized Multi-Agent Stochastic Shortest Path Problems
Machine Learning (CS)
Helps robots work together to finish tasks.
Regret Bounds for Robust Online Decision Making
Machine Learning (CS)
Helps computers learn from uncertain information.