LLM-Driven Intrinsic Motivation for Sparse Reward Reinforcement Learning
By: André Quadros, Cassio Silva, Ronnie Alves
Potential Business Impact:
Helps robots learn faster in games where rewards are rare.
This paper explores the combination of two intrinsic motivation strategies to improve the efficiency of reinforcement learning (RL) agents in environments with extremely sparse rewards, where traditional learning struggles due to infrequent positive feedback. We propose integrating Variational State as Intrinsic Reward (VSIMR), which uses Variational AutoEncoders (VAEs) to reward state novelty, with an intrinsic reward approach derived from Large Language Models (LLMs). The LLM leverages its pre-trained knowledge to generate reward signals from textual descriptions of the environment and the goal, guiding the agent. We implemented this combined approach with an Advantage Actor-Critic (A2C) agent in the MiniGrid DoorKey environment, a standard benchmark for sparse rewards. Our empirical results show that the combined strategy significantly improves agent performance and sample efficiency compared to each strategy used individually or to a standard A2C agent, which failed to learn. Analysis of the learning curves indicates that the two signals complement each other: VSIMR drives exploration of new states, while the LLM-derived rewards promote progressive exploitation toward the goal.
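As a rough illustration of the idea (not the authors' implementation), the sketch below shows how the sparse extrinsic reward, a VAE reconstruction-error novelty bonus, and an LLM-derived goal-progress score could be summed at each environment step. The function names, the `llm_score_fn` wrapper, and the weights `beta_vae`/`beta_llm` are assumptions made here for clarity.

```python
import numpy as np


def vsimr_reward(vae, obs):
    # Novelty bonus: mean squared reconstruction error of the observation
    # under the VAE; poorly reconstructed (unfamiliar) states score higher.
    recon = vae.decode(vae.encode(obs))
    return float(np.mean((np.asarray(obs) - np.asarray(recon)) ** 2))


def llm_reward(llm_score_fn, state_description, goal_description):
    # Goal-progress bonus: a score in [0, 1] returned by prompting an LLM
    # with textual descriptions of the current state and the task goal.
    # `llm_score_fn` is a hypothetical wrapper around the actual LLM call.
    return float(llm_score_fn(state_description, goal_description))


def combined_reward(r_ext, r_vsimr, r_llm, beta_vae=0.1, beta_llm=0.1):
    # Total reward fed to the A2C update: the sparse extrinsic reward plus
    # the two weighted intrinsic bonuses (the weights are illustrative).
    return r_ext + beta_vae * r_vsimr + beta_llm * r_llm


if __name__ == "__main__":
    # Toy check with stand-in components (no real VAE or LLM required).
    class IdentityVAE:
        def encode(self, x):
            return x

        def decode(self, z):
            return z

    obs = np.random.rand(7, 7, 3)  # MiniGrid-style partial observation grid
    total = combined_reward(
        r_ext=0.0,  # extrinsic reward in DoorKey is usually zero until success
        r_vsimr=vsimr_reward(IdentityVAE(), obs),
        r_llm=llm_reward(lambda s, g: 0.5,
                         "agent is next to the key",
                         "pick up the key and open the door"),
    )
    print(total)
```

In a full training loop, this combined reward would simply replace the environment reward in the A2C advantage computation, while the VAE is periodically refit on recently visited states so that familiar states stop receiving the novelty bonus.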
Similar Papers
MIR: Efficient Exploration in Episodic Multi-Agent Reinforcement Learning via Mutual Intrinsic Reward
Artificial Intelligence
Helps robot teams learn to work together better.
Autonomous state-space segmentation for Deep-RL sparse reward scenarios
Machine Learning (CS)
Teaches robots to learn new skills faster.
Guiding Exploration in Reinforcement Learning Through LLM-Augmented Observations
Machine Learning (CS)
Helps robots learn tasks faster using smart advice.