LLM-Empowered Embodied Agent for Memory-Augmented Task Planning in Household Robotics
By: Marc Glocker, Peter Hönig, Matthias Hirschmanner, and more
Potential Business Impact:
Robot learns to manage household objects by remembering past tasks.
We present an embodied robotic system with an LLM-driven agent-orchestration architecture for autonomous household object management. The system integrates memory-augmented task planning, enabling robots to execute high-level user commands while tracking past actions. It employs three specialized agents: a routing agent, a task planning agent, and a knowledge base agent, each powered by task-specific LLMs. By leveraging in-context learning, our system avoids the need for explicit model training. Retrieval-Augmented Generation (RAG) enables the system to retrieve context from past interactions, enhancing long-term object tracking. A combination of Grounded SAM and LLaMA3.2-Vision provides robust object detection, facilitating semantic scene understanding for task planning. Evaluation across three household scenarios demonstrates high task planning accuracy and an improvement in memory recall due to RAG. Specifically, Qwen2.5 yields the best performance for the specialized agents, while LLaMA3.1 excels in routing tasks. The source code is available at: https://github.com/marc1198/chat-hsr
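The orchestration pattern described in the abstract can be sketched in a few lines: a routing agent dispatches each user command to a task-planning or knowledge-base agent, and a retrieval step supplies context from past interactions. This is a minimal illustrative sketch, not the authors' implementation; the agent functions, the keyword-based router, and the token-overlap retrieval (a stand-in for embedding-based RAG) are all assumptions introduced here.

```python
# Sketch of the paper's agent-orchestration pattern (all names/heuristics
# are assumptions, not the chat-hsr implementation).
from dataclasses import dataclass, field


@dataclass
class MemoryStore:
    """Past interactions; retrieve() is a toy stand-in for embedding-based RAG."""
    entries: list[str] = field(default_factory=list)

    def add(self, entry: str) -> None:
        self.entries.append(entry)

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        # Rank stored interactions by word overlap with the query.
        q = set(query.lower().split())
        scored = sorted(self.entries,
                        key=lambda e: len(q & set(e.lower().split())),
                        reverse=True)
        return scored[:k]


def routing_agent(command: str) -> str:
    """Pick a specialized agent; the paper uses an LLM, here a keyword heuristic."""
    if any(w in command.lower() for w in ("where", "what", "when", "did")):
        return "knowledge_base"
    return "task_planner"


def task_planning_agent(command: str, context: list[str]) -> list[str]:
    # In the paper an LLM produces the plan in-context; this stub returns
    # placeholder steps so the control flow is runnable.
    return [f"resolve objects mentioned in: {command!r}",
            f"plan actions using retrieved context: {context}"]


def knowledge_base_agent(query: str, context: list[str]) -> str:
    # Answer object-history questions from retrieved memory.
    return context[0] if context else "no relevant memory found"


def handle(command: str, memory: MemoryStore):
    route = routing_agent(command)
    context = memory.retrieve(command)
    if route == "task_planner":
        result = task_planning_agent(command, context)
    else:
        result = knowledge_base_agent(command, context)
    memory.add(command)  # long-term tracking: every interaction is stored
    return route, result
```

For example, after `memory.add("put the red mug on the kitchen shelf")`, the query "where did you put the red mug?" routes to the knowledge-base agent and recovers that stored action from memory.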
Similar Papers
Grounding Multimodal LLMs to Embodied Agents that Ask for Help with Reinforcement Learning
Artificial Intelligence
Robots learn to ask questions to do jobs better.
Large language model-based task planning for service robots: A review
Robotics
Robots learn to plan tasks using AI brains.
Mobile-Agent-RAG: Driving Smart Multi-Agent Coordination with Contextual Knowledge Empowerment for Long-Horizon Mobile Automation
Artificial Intelligence
Helps robots complete complex phone tasks better.