iLearnRobot: An Interactive Learning-Based Multi-Modal Robot with Continuous Improvement
By: Kohou Wang, ZhaoXiang Liu, Lin Bai, and more
Potential Business Impact:
Robots get better over time by learning from conversations with people.
It is crucial that robots' performance can improve after deployment, as they are inherently likely to encounter novel scenarios never seen before. This paper presents an innovative solution: an interactive learning-based robot system powered by a Multi-modal Large Language Model (MLLM). A key feature of our system is its ability to learn from natural dialogues with non-expert users. We also propose a chain-of-question mechanism that clarifies the exact intent of a question before providing an answer, and dual-modality retrieval modules that leverage these interaction events to avoid repeating the same mistakes, ensuring a seamless user experience before model updates; this is in contrast to current mainstream MLLM-based robotic systems. Our system marks a novel approach in robotics by integrating interactive learning, paving the way for superior adaptability and performance in diverse environments. We demonstrate the effectiveness and improvement of our method through experiments, both quantitatively and qualitatively.
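To make the dual-modality retrieval idea concrete, here is a minimal, hypothetical sketch of how past interaction events could be indexed by both a text embedding and an image embedding, then matched against new queries before any model update. The class name DualModalityMemory, the placeholder encoders, the equal-weight score fusion, and the similarity threshold are all illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical dual-modality retrieval store: each past interaction event
# (question, scene image, user-provided correction) is indexed under both a
# text embedding and an image embedding, so a new query can be matched
# against remembered mistakes before the underlying MLLM is retrained.

def embed_text(text: str, dim: int = 64) -> np.ndarray:
    """Placeholder text encoder; a real system would use an MLLM/CLIP-style model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def embed_image(image_id: str, dim: int = 64) -> np.ndarray:
    """Placeholder image encoder, keyed here by an image identifier."""
    rng = np.random.default_rng(abs(hash("img:" + image_id)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

class DualModalityMemory:
    def __init__(self):
        self.events = []       # (question, image_id, corrected_answer)
        self.text_index = []   # one text embedding per stored event
        self.image_index = []  # one image embedding per stored event

    def add_event(self, question, image_id, corrected_answer):
        """Record an interaction event along with its user correction."""
        self.events.append((question, image_id, corrected_answer))
        self.text_index.append(embed_text(question))
        self.image_index.append(embed_image(image_id))

    def retrieve(self, question, image_id, threshold=0.5):
        """Return the stored correction whose combined text+image cosine
        similarity to the query is highest, or None if nothing is close."""
        if not self.events:
            return None
        qt, qi = embed_text(question), embed_image(image_id)
        text_sims = np.array(self.text_index) @ qt
        image_sims = np.array(self.image_index) @ qi
        combined = 0.5 * text_sims + 0.5 * image_sims  # equal-weight fusion
        best = int(np.argmax(combined))
        return self.events[best][2] if combined[best] >= threshold else None

# Usage: a corrected answer is recalled when a similar situation recurs.
memory = DualModalityMemory()
memory.add_event("What is on the table?", "scene_042",
                 "A mug, not a bowl -- corrected by the user.")
print(memory.retrieve("What is on the table?", "scene_042"))
```

The design point this sketch illustrates is that retrieval over stored corrections lets the robot stop repeating a known mistake immediately, decoupling the user-facing fix from the slower cycle of updating the MLLM itself.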
Similar Papers
Building Knowledge from Interactions: An LLM-Based Architecture for Adaptive Tutoring and Social Reasoning
Robotics
Robots learn to teach and remember like humans.
LLM-based Interactive Imitation Learning for Robotic Manipulation
Robotics
Teaches robots using AI, not people.
Multi-Agent Systems for Robotic Autonomy with LLMs
Robotics
Builds robots that can do jobs by themselves.