HELM: Human-Preferred Exploration with Language Models
By: Shuhao Liao, Xuxin Lv, Yuhong Cao, and others
Potential Business Impact:
Robots learn to explore where you want them to.
In autonomous exploration tasks, robots must explore and map unknown environments while planning efficiently under dynamic and uncertain conditions. Given the significant variability of environments, human operators often have specific preferences for exploration, such as prioritizing certain areas or optimizing for different aspects of efficiency. However, existing methods struggle to accommodate these preferences adaptively, often requiring extensive parameter tuning or network retraining. With recent advances in Large Language Models (LLMs), which have been widely applied to text-based planning and complex reasoning, their potential for enhancing autonomous exploration is increasingly promising. Motivated by this, we propose an LLM-based human-preferred exploration framework that seamlessly integrates a mobile robot system with LLMs. By leveraging the reasoning and adaptability of LLMs, our approach enables intuitive and flexible preference control through natural language while maintaining a task success rate comparable to state-of-the-art traditional methods. Experimental results demonstrate that our framework effectively bridges the gap between human intent and policy preference in autonomous exploration, offering a more user-friendly and adaptable solution for real-world robotic applications.
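To make the idea of natural-language preference control concrete, here is a minimal sketch (not the paper's actual implementation): a hypothetical pipeline in which an LLM translates a free-form preference into a JSON weight vector, which the robot then uses to rank candidate exploration frontiers. The canned responses, feature names (`distance`, `info_gain`, `corridor`), and function names are all illustrative assumptions; a real system would prompt an actual LLM.

```python
import json

def llm_preference_to_weights(preference: str) -> dict:
    """Stand-in for an LLM call. A real system would prompt an LLM to emit
    a JSON weight vector over frontier features; here we return canned
    responses purely for illustration."""
    canned = {
        "explore corridors first":
            '{"distance": -1.0, "info_gain": 0.5, "corridor": 2.0}',
        "maximize coverage":
            '{"distance": -0.5, "info_gain": 2.0, "corridor": 0.0}',
    }
    # Default: a generic nearest-frontier / information-gain trade-off.
    default = '{"distance": -1.0, "info_gain": 1.0, "corridor": 0.0}'
    return json.loads(canned.get(preference, default))

def score_frontier(frontier: dict, weights: dict) -> float:
    """Weighted sum of frontier features; higher is better."""
    return sum(weights.get(k, 0.0) * v for k, v in frontier.items())

def pick_frontier(frontiers: list, preference: str) -> dict:
    """Select the frontier that best matches the stated preference."""
    weights = llm_preference_to_weights(preference)
    return max(frontiers, key=lambda f: score_frontier(f, weights))

# Hypothetical frontier candidates with hand-made feature values.
frontiers = [
    {"distance": 4.0, "info_gain": 3.0, "corridor": 1.0},  # far, corridor
    {"distance": 1.5, "info_gain": 1.0, "corridor": 0.0},  # near, open area
]
best = pick_frontier(frontiers, "explore corridors first")
```

The point of the sketch is the division of labor: the LLM handles the open-ended language-to-intent mapping once per preference, while the fast numeric scoring loop stays on the robot, so no retraining or parameter sweep is needed when the operator changes their mind.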
Similar Papers
User Feedback Alignment for LLM-powered Exploration in Large-scale Recommendation Systems
Information Retrieval
Finds new videos you'll like, not just favorites.
From Vague Instructions to Task Plans: A Feedback-Driven HRC Task Planning Framework based on LLMs
Robotics
Robots follow your simple spoken wishes.
AuDeRe: Automated Strategy Decision and Realization in Robot Planning and Control via LLMs
Robotics
Robots learn to do new jobs by reading instructions.