Adaptable Hindsight Experience Replay for Search-Based Learning
By: Alexandros Vazaios, Jannis Brugger, Cedric Derstroff, and more
Potential Business Impact:
Discovers mathematical equations by searching and learning from failed attempts.
AlphaZero-like Monte Carlo Tree Search systems, originally introduced for two-player games, dynamically balance exploration and exploitation using neural-network guidance, which also makes them suitable for classical search problems. However, the original method of training the network on simulation results is limited in sparse-reward settings, especially early in training, when the network cannot yet provide useful guidance. Hindsight Experience Replay (HER) addresses this issue by relabeling unsuccessful trajectories from the search tree as supervised learning signals. We introduce Adaptable HER, a flexible framework that integrates HER with AlphaZero and allows easy adjustment of HER properties such as relabeled goals, policy targets, and trajectory selection. Our experiments, including equation discovery, show that the ability to modify HER is beneficial and that the approach surpasses pure supervised or reinforcement learning.
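The core HER idea described above can be sketched in a few lines: a trajectory that failed to reach its intended goal is relabeled so that the state it actually reached counts as the goal, turning the failure into positive supervised examples. This is a minimal illustrative sketch, not the paper's implementation; the function and data layout are hypothetical.

```python
# Hindsight relabeling sketch. Assumptions (not from the paper): a trajectory
# is a list of (state, action) pairs, and states double as goals.

def relabel_trajectory(trajectory, achieved_goal):
    """Convert a failed trajectory into supervised (state, goal, action)
    targets by pretending the goal reached in hindsight was intended."""
    return [(state, achieved_goal, action) for state, action in trajectory]

# Example: a search that never reached the true goal "G" but ended at "D".
failed = [("A", "right"), ("B", "down"), ("C", "right")]
relabeled = relabel_trajectory(failed, achieved_goal="D")
# Each tuple is now a positive training example for reaching "D".
```

In the AlphaZero setting, such relabeled tuples would serve as policy/value training targets in place of the missing sparse reward; Adaptable HER makes choices like which goal to relabel with and which trajectories to select configurable.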
Similar Papers
Next-Future: Sample-Efficient Policy Learning for Robotic-Arm Tasks
Robotics
Teaches robots to learn from mistakes faster.
GCHR: Goal-Conditioned Hindsight Regularization for Sample-Efficient Reinforcement Learning
Machine Learning (CS)
Teaches robots to learn faster from mistakes.
Replay Failures as Successes: Sample-Efficient Reinforcement Learning for Instruction Following
Artificial Intelligence
Teaches AI to learn from its mistakes.