Trajectory Adaptation using Large Language Models
By: Anurag Maurya, Tashmoy Ghosh, Ravi Prakash
Potential Business Impact:
Robots follow your spoken directions for new tasks.
Adapting robot trajectories to human instructions in new situations is essential for more intuitive and scalable human-robot interaction. This work proposes a flexible language-based framework for adapting generic robot trajectories produced by off-the-shelf motion planners such as RRT or A*, or learned from human demonstrations. We use pre-trained LLMs to adapt trajectory waypoints by generating code as a policy for dense robot manipulation, enabling more complex and flexible instructions than current methods support. This approach allows us to incorporate a broader range of commands, including numerical inputs. Unlike state-of-the-art feature-based sequence-to-sequence models, our method requires no task-specific training and offers greater interpretability and more effective feedback mechanisms. We validate our approach through simulation experiments on a robotic manipulator, an aerial vehicle, and a ground robot in the PyBullet and Gazebo simulation environments, demonstrating that LLMs can successfully adapt trajectories to complex human instructions.
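
To make the code-as-policy idea concrete, here is a minimal illustrative sketch (not the authors' implementation): an LLM is prompted with a trajectory and a natural-language instruction and returns Python code that edits the waypoints, which is then executed against the planner's output. The names query_llm, adapt_waypoints, and the prompt template are hypothetical placeholders; the LLM call is stubbed with a canned response so the example runs offline.

import numpy as np

PROMPT_TEMPLATE = """You are given a robot trajectory as an (N, 3) array of
xyz waypoints named `waypoints`. Write a Python function
`adapt_waypoints(waypoints)` that returns a new (N, 3) array satisfying this
instruction: "{instruction}". Return only code."""


def query_llm(prompt: str) -> str:
    """Placeholder for a call to a pre-trained LLM.
    Returns a canned code snippet so the sketch runs offline."""
    return (
        "def adapt_waypoints(waypoints):\n"
        "    adapted = waypoints.copy()\n"
        "    adapted[:, 2] += 0.10  # raise the whole path by 10 cm\n"
        "    return adapted\n"
    )


def adapt_trajectory(waypoints: np.ndarray, instruction: str) -> np.ndarray:
    """Ask the LLM for code-as-policy and apply it to the waypoints."""
    code = query_llm(PROMPT_TEMPLATE.format(instruction=instruction))
    namespace: dict = {}
    exec(code, namespace)  # in practice, sandbox LLM-generated code
    return namespace["adapt_waypoints"](waypoints)


if __name__ == "__main__":
    # A straight-line trajectory from an off-the-shelf planner.
    traj = np.linspace([0.0, 0.0, 0.2], [0.5, 0.0, 0.2], num=10)
    new_traj = adapt_trajectory(traj, "stay 10 cm higher above the table")
    print(new_traj[:3])

Because the adaptation is expressed as readable code over waypoints, numerical commands ("10 cm higher") map directly to arithmetic on the trajectory, which is what makes the approach interpretable and easy to give feedback on.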
Similar Papers
Trajectory Prediction Meets Large Language Models: A Survey
Computation and Language
Helps self-driving cars predict where things go.
Speech-to-Trajectory: Learning Human-Like Verbal Guidance for Robot Motion
Robotics
Robots understand and do what you say.
Robust Mobile Robot Path Planning via LLM-Based Dynamic Waypoint Generation
Robotics
Robots follow spoken directions to move safely.