Autonomous state-space segmentation for Deep-RL sparse reward scenarios
By: Gianluca Maselli, Vieri Giuliano Santucci
Potential Business Impact:
Teaches robots to learn new skills faster.
Dealing with environments with sparse rewards has always been crucial for systems developed to operate in autonomous open-ended learning settings. Intrinsic Motivations could be an effective way to help Deep Reinforcement Learning algorithms learn in such scenarios. In fact, intrinsic reward signals, such as novelty or curiosity, are generally adopted to improve exploration when extrinsic rewards are delayed or absent. Building on previous works, we tackle the problem of learning policies in the presence of sparse rewards by proposing a two-level architecture that alternates an "intrinsically driven" phase of exploration and autonomous sub-goal generation with a phase of sparse-reward, goal-directed policy learning. The idea is to build several small networks, each one specialized in a particular sub-path, and use them as starting points for future exploration, without the need to re-explore previously learnt paths from scratch. Two versions of the system have been trained and tested in the Gym SuperMarioBros environment without considering any additional extrinsic reward. The results show the validity of our approach and the importance of autonomously segmenting the environment to generate an efficient path towards the final goal.
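The two-level loop described in the abstract, an intrinsically driven exploration phase that generates a sub-goal, followed by freezing a small policy for that sub-path and replaying the resulting chain as the launch point for further exploration, can be sketched as below. This is only an illustrative toy under stated assumptions: a 1D corridor stands in for the SuperMarioBros level, a count-based novelty bonus stands in for the intrinsic reward, and greedy tabular "policies" stand in for the small networks; all names and hyperparameters are assumptions, not the authors' implementation.

```python
import random

# Toy 1D corridor in place of a level: the agent starts at 0 and the final,
# never-rewarded goal sits at position GOAL. Everything here is illustrative.
GOAL = 30
ACTIONS = [-1, +1]  # step left / step right


def step(pos, action):
    """Environment transition, clamped to the corridor."""
    return max(0, min(GOAL, pos + action))


class SubPolicy:
    """One small policy specialised on a single sub-path (start -> sub_goal)."""

    def __init__(self, start, sub_goal):
        self.start, self.sub_goal = start, sub_goal

    def run(self, pos):
        # Greedy stand-in for a learnt goal-directed policy.
        while pos != self.sub_goal:
            pos = step(pos, +1 if self.sub_goal > pos else -1)
        return pos


def intrinsic_exploration(start, visits, horizon=40):
    """Random rollout from `start`, scored by a count-based novelty bonus."""
    pos, best_pos, best_bonus = start, start, 0.0
    for _ in range(horizon):
        pos = step(pos, random.choice(ACTIONS))
        visits[pos] = visits.get(pos, 0) + 1
        bonus = 1.0 / visits[pos]  # rarely visited states score higher
        if bonus >= best_bonus:
            best_pos, best_bonus = pos, bonus
    return best_pos  # most novel state reached becomes the next sub-goal


def train(n_phases=20, seed=0):
    random.seed(seed)
    visits, chain, frontier = {0: 1}, [], 0
    for _ in range(n_phases):
        # Phase 1: replay the chain of frozen sub-policies to reach the
        # current frontier cheaply, without re-exploring learnt paths.
        pos = 0
        for pi in chain:
            pos = pi.run(pos)
        # Phase 2: intrinsically driven exploration beyond the frontier,
        # then freeze a new sub-policy for the newly generated sub-goal.
        sub_goal = intrinsic_exploration(pos, visits)
        if sub_goal != frontier:
            chain.append(SubPolicy(pos, sub_goal))
            frontier = sub_goal
        if frontier == GOAL:
            break
    return chain, frontier


if __name__ == "__main__":
    chain, frontier = train()
    print("sub-goals:", [p.sub_goal for p in chain], "frontier:", frontier)
```

The key design point the sketch tries to capture is the segmentation itself: each discovered sub-goal yields a dedicated, frozen sub-policy, so later exploration always starts from the current frontier instead of from scratch.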
Similar Papers
LLM-Driven Intrinsic Motivation for Sparse Reward Reinforcement Learning
Machine Learning (CS)
Helps robots learn faster in tricky games.
Less is more? Rewards in RL for Cyber Defence
Machine Learning (CS)
Trains smarter computer defenders using fewer rewards.
Towards better dense rewards in Reinforcement Learning Applications
Artificial Intelligence
Teaches robots to learn tasks faster with better rewards.