Guiding Exploration in Reinforcement Learning Through LLM-Augmented Observations

Published: October 9, 2025 | arXiv ID: 2510.08779v1

By: Vaibhav Jain, Gerrit Grossmann

Potential Business Impact:

Helps RL agents learn tasks faster by supplying LLM-generated advice they can choose to follow or ignore.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Reinforcement Learning (RL) agents often struggle in sparse-reward environments where traditional exploration strategies fail to discover effective action sequences. Large Language Models (LLMs) possess procedural knowledge and reasoning capabilities from text pretraining that could guide RL exploration, but existing approaches create rigid dependencies where RL policies must follow LLM suggestions or incorporate them directly into reward functions. We propose a framework that provides LLM-generated action recommendations through augmented observation spaces, allowing RL agents to learn when to follow or ignore this guidance. Our method leverages LLMs' world knowledge and reasoning abilities while maintaining flexibility through soft constraints. We evaluate our approach on three BabyAI environments of increasing complexity and show that the benefits of LLM guidance scale with task difficulty. In the most challenging environment, we achieve 71% relative improvement in final success rates over baseline. The approach provides substantial sample efficiency gains, with agents reaching performance thresholds up to 9 times faster, and requires no modifications to existing RL algorithms. Our results demonstrate an effective method for leveraging LLM planning capabilities to accelerate RL training in challenging environments.
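As a concrete illustration of the augmented-observation idea, the sketch below wraps a Gymnasium environment so that every observation carries an extra `llm_action` field holding a suggested action. This is a minimal sketch under assumptions, not the authors' implementation: `query_llm_for_action` is a hypothetical stub (a real version would serialize the state into a text prompt and parse the LLM's reply), and CartPole-v1 stands in for the BabyAI environments so the example is self-contained.

```python
import gymnasium as gym
from gymnasium import spaces
import numpy as np

def query_llm_for_action(obs, n_actions):
    """Hypothetical LLM query: return a suggested action index.

    A real implementation would prompt an LLM with a textual
    description of `obs`; here we stub it with a random suggestion
    so the wrapper runs end to end.
    """
    return int(np.random.randint(n_actions))

class LLMAdviceWrapper(gym.ObservationWrapper):
    """Augment observations with an LLM-suggested action.

    The agent sees the suggestion as one extra feature and remains
    free to follow or ignore it -- the soft-constraint idea described
    in the abstract above.
    """

    def __init__(self, env):
        super().__init__(env)
        self.observation_space = spaces.Dict({
            "obs": env.observation_space,
            "llm_action": spaces.Discrete(env.action_space.n),
        })

    def observation(self, obs):
        suggestion = query_llm_for_action(obs, self.env.action_space.n)
        return {"obs": obs, "llm_action": suggestion}

if __name__ == "__main__":
    env = LLMAdviceWrapper(gym.make("CartPole-v1"))
    obs, info = env.reset()
    print(obs["llm_action"])  # the advice channel the policy can learn to exploit
```

Because the advice arrives as just another observation feature, a standard RL algorithm can consume it without modification, which is consistent with the abstract's claim that no changes to existing RL algorithms are required.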

Page Count
6 pages

Category
Computer Science:
Machine Learning (CS)