In-Context Reinforcement Learning through Bayesian Fusion of Context and Value Prior
By: Anaïs Berkes, Vincent Taboga, Donna Vakalis, and more
Potential Business Impact:
Helps robots learn new tasks super fast.
In-context reinforcement learning (ICRL) promises fast adaptation to unseen environments without parameter updates, but current methods either cannot improve beyond the training distribution or require near-optimal data, limiting practical adoption. We introduce SPICE, a Bayesian ICRL method that learns a prior over Q-values via a deep ensemble and refines this prior at test time through Bayesian updates on in-context information. To recover from poor priors resulting from training on suboptimal data, our online inference follows an Upper-Confidence-Bound rule that favours exploration and adaptation. We prove that SPICE achieves regret-optimal behaviour in both stochastic bandits and finite-horizon MDPs, even when pretrained only on suboptimal trajectories. We validate these findings empirically across bandit and control benchmarks: SPICE makes near-optimal decisions on unseen tasks and substantially reduces regret compared to prior ICRL and meta-RL approaches, while adapting rapidly and remaining robust under distribution shift.
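To make the idea concrete, here is a minimal sketch of the kind of mechanism the abstract describes: an ensemble-derived Gaussian prior over Q-values, a conjugate Bayesian update from in-context rewards at test time, and a UCB action rule. This is an illustration under assumptions (Gaussian prior and noise, a toy bandit, a random stand-in for the pretrained ensemble, and the names `ucb_action`, `bayes_update`, `beta`, `obs_noise`), not the paper's actual implementation.

```python
import numpy as np

# Illustrative sketch, not SPICE itself: fuse an ensemble-based Q-value prior
# with in-context observations via conjugate Gaussian updates, act with UCB.

rng = np.random.default_rng(0)
n_actions, n_ensemble = 4, 8

# Prior over Q-values: mean/variance across a (hypothetical) pretrained ensemble.
# Here the ensemble outputs are random stand-ins for learned Q-heads.
ensemble_q = rng.normal(0.5, 0.3, size=(n_ensemble, n_actions))
prior_mean = ensemble_q.mean(axis=0)
prior_var = ensemble_q.var(axis=0) + 1e-3

post_mean, post_var = prior_mean.copy(), prior_var.copy()
obs_noise = 0.25  # assumed observation-noise variance

def ucb_action(beta=2.0):
    """Pick the action maximising posterior mean plus an exploration bonus."""
    return int(np.argmax(post_mean + beta * np.sqrt(post_var)))

def bayes_update(a, reward):
    """Conjugate Gaussian update of the Q-value posterior for the chosen action."""
    precision = 1.0 / post_var[a] + 1.0 / obs_noise
    post_mean[a] = (post_mean[a] / post_var[a] + reward / obs_noise) / precision
    post_var[a] = 1.0 / precision

# Toy bandit loop: the true action values are unknown to the agent.
true_q = np.array([0.2, 0.9, 0.4, 0.6])
counts = np.zeros(n_actions, dtype=int)
for _ in range(200):
    a = ucb_action()
    r = true_q[a] + rng.normal(0.0, np.sqrt(obs_noise))
    bayes_update(a, r)
    counts[a] += 1

print("posterior means:", np.round(post_mean, 2))
print("pulls per arm:  ", counts)
```

Even if the prior (the ensemble stand-in) is poorly calibrated, the UCB bonus keeps under-explored arms attractive until the in-context updates correct the posterior, which is the recovery behaviour the abstract attributes to suboptimal pretraining data.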
Similar Papers
Scalable In-Context Q-Learning
Artificial Intelligence
Teaches computers to learn from bad examples.
In-Context Learning Is Provably Bayesian Inference: A Generalization Theory for Meta-Learning
Machine Learning (Stat)
Teaches computers to learn new tasks faster.
Towards Provable Emergence of In-Context Reinforcement Learning
Machine Learning (CS)
Lets AI learn new things without retraining.