Heuristic Transformer: Belief Augmented In-Context Reinforcement Learning
By: Oliver Dippel, Alexei Lisitsa, Bei Peng
Potential Business Impact:
Teaches robots to learn new tasks faster.
Transformers have demonstrated exceptional in-context learning (ICL) capabilities, enabling applications across natural language processing, computer vision, and sequential decision-making. In reinforcement learning, ICL reframes learning as a supervised problem, facilitating task adaptation without parameter updates. Building on prior work leveraging transformers for sequential decision-making, we propose Heuristic Transformer (HT), an in-context reinforcement learning (ICRL) approach that augments the in-context dataset with a belief distribution over rewards to achieve better decision-making. Using a variational auto-encoder (VAE), we learn a low-dimensional stochastic variable that represents the posterior distribution over rewards; this belief is incorporated, alongside the in-context dataset and query states, as a prompt to the transformer policy. We assess the performance of HT across the Darkroom, Miniworld, and MuJoCo environments, showing that it consistently surpasses comparable baselines in terms of both effectiveness and generalization. Our method presents a promising direction for bridging the gap between belief-based augmentations and transformer-based decision-making.
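The abstract describes a two-part pipeline: a VAE encodes the in-context dataset into a low-dimensional latent belief over rewards, and that belief is combined with the in-context transitions and the query state to form the prompt for a transformer policy. The sketch below is a minimal, hypothetical illustration of this prompt construction in PyTorch; the module names (BeliefVAE, HeuristicTransformerPolicy), dimensions, aggregation scheme, and tokenization are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class BeliefVAE(nn.Module):
    """Hypothetical VAE mapping an in-context dataset of (s, a, r) tuples
    to a low-dimensional latent belief over rewards (mean and log-variance)."""
    def __init__(self, transition_dim: int, latent_dim: int = 8, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(transition_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * latent_dim),  # outputs [mu, log_var]
        )
        # Decoder would reconstruct rewards from the latent during VAE training.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, context: torch.Tensor):
        # context: (batch, num_transitions, transition_dim)
        stats = self.encoder(context).mean(dim=1)  # aggregate over transitions
        mu, log_var = stats.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()  # reparameterization trick
        return z, mu, log_var


class HeuristicTransformerPolicy(nn.Module):
    """Hypothetical transformer policy conditioned on the belief latent,
    the in-context dataset, and the query state."""
    def __init__(self, transition_dim: int, state_dim: int, action_dim: int,
                 latent_dim: int = 8, embed_dim: int = 64):
        super().__init__()
        self.embed_transition = nn.Linear(transition_dim, embed_dim)
        self.embed_state = nn.Linear(state_dim, embed_dim)
        self.embed_belief = nn.Linear(latent_dim, embed_dim)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.action_head = nn.Linear(embed_dim, action_dim)

    def forward(self, context, belief_z, query_state):
        # Prompt = [belief token] + [in-context transition tokens] + [query-state token]
        tokens = torch.cat([
            self.embed_belief(belief_z).unsqueeze(1),
            self.embed_transition(context),
            self.embed_state(query_state).unsqueeze(1),
        ], dim=1)
        hidden = self.transformer(tokens)
        return self.action_head(hidden[:, -1])  # action prediction at the query position


if __name__ == "__main__":
    # Toy dimensions for illustration only.
    state_dim, action_dim, num_ctx = 4, 2, 16
    transition_dim = state_dim + action_dim + 1          # (s, a, r)
    context = torch.randn(1, num_ctx, transition_dim)    # in-context dataset
    query_state = torch.randn(1, state_dim)

    vae = BeliefVAE(transition_dim)
    policy = HeuristicTransformerPolicy(transition_dim, state_dim, action_dim)
    z, mu, log_var = vae(context)
    action_logits = policy(context, z, query_state)
    print(action_logits.shape)  # torch.Size([1, 2])
```

In this sketch the belief latent is prepended as a single extra token, which is one plausible way to "incorporate" it into the prompt; the paper may instead concatenate it with each transition or condition the policy differently.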
Similar Papers
In-Context Learning Enhanced Credibility Transformer
Machine Learning (CS)
Helps computers learn from new examples better.
On the Emergence of Induction Heads for In-Context Learning
Artificial Intelligence
Helps computers learn new things from examples.
Can Transformers Break Encryption Schemes via In-Context Learning?
Machine Learning (CS)
Teaches computers to break secret codes.