PAC-Bayesian Reinforcement Learning Trains Generalizable Policies
By: Abdelkrim Zitouni, Mehdi Hennequin, Juba Agoun, and more
Potential Business Impact:
Helps robots learn faster and more safely.
We derive a novel PAC-Bayesian generalization bound for reinforcement learning that explicitly accounts for Markov dependencies in the data through the chain's mixing time. This addresses a key obstacle to obtaining generalization guarantees for reinforcement learning, where the sequential nature of the data breaks the independence assumptions underlying classical bounds. Our bound yields non-vacuous certificates for modern off-policy algorithms such as Soft Actor-Critic. We demonstrate the bound's practical utility through PB-SAC, a novel algorithm that optimizes the bound during training to guide exploration. Experiments across continuous control tasks show that our approach provides meaningful confidence certificates while maintaining competitive performance.
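The abstract does not reproduce the bound itself, so the following is only a rough, hypothetical sketch of the general shape such a result can take: a McAllester-style PAC-Bayes bound in which the i.i.d. sample size n is replaced by an effective sample size n_eff ≈ n / τ_mix that discounts for the chain's mixing time. The risk R, empirical risk R̂, prior P, posterior Q, and the exact constants here are illustrative assumptions, not the paper's statement:

% Hypothetical sketch only: the paper's actual bound may differ in form and constants.
% Assumes a bounded risk R(pi) in [0, 1]; P is a data-independent prior over policies,
% Q is any posterior, and n_eff = n / tau_mix is an effective sample size that
% discounts the n dependent samples by the chain's mixing time tau_mix.
\[
\mathbb{E}_{\pi \sim Q}\!\left[ R(\pi) \right]
\;\le\;
\mathbb{E}_{\pi \sim Q}\!\left[ \widehat{R}(\pi) \right]
+ \sqrt{ \frac{ \mathrm{KL}(Q \,\|\, P) + \ln \frac{n_{\mathrm{eff}}}{\delta} }
              { 2\,\left( n_{\mathrm{eff}} - 1 \right) } },
\qquad
n_{\mathrm{eff}} \approx \frac{n}{\tau_{\mathrm{mix}}},
\]
holding with probability at least 1 − δ over the draw of the trajectory data. In this reading, a PB-SAC-style algorithm would treat the right-hand side as a training-time objective, trading empirical return against the KL complexity term to guide exploration.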
Similar Papers
PAC-Bayesian Generalization Bounds for Graph Convolutional Networks on Inductive Node Classification
Machine Learning (CS)
Helps computers learn from changing online connections.
Some theoretical improvements on the tightness of PAC-Bayes risk certificates for neural networks
Machine Learning (CS)
Makes AI more trustworthy and reliable.
PAC Apprenticeship Learning with Bayesian Active Inverse Reinforcement Learning
Machine Learning (CS)
Teaches robots to learn safely from few examples.