Enhanced Motion Forecasting with Plug-and-Play Multimodal Large Language Models
By: Katie Luo, Jingwei Ji, Tong He, and more
Potential Business Impact:
Helps self-driving cars predict what other road users will do.
Current autonomous driving systems rely on specialized models for perception and motion prediction, which perform reliably in standard conditions. However, generalizing cost-effectively to diverse real-world scenarios remains a significant challenge. To address this, we propose Plug-and-Forecast (PnF), a plug-and-play approach that augments existing motion forecasting models with multimodal large language models (MLLMs). PnF builds on the insight that natural language offers a more effective way to describe and handle complex scenarios, enabling quick adaptation to targeted behaviors. We design prompts that extract structured scene understanding from MLLMs and distill this information into learnable embeddings that augment existing behavior prediction models. Our method leverages the zero-shot reasoning capabilities of MLLMs to achieve significant improvements in motion prediction performance while requiring no fine-tuning -- making it practical to adopt. We validate our approach on two state-of-the-art motion forecasting models using the Waymo Open Motion Dataset and the nuScenes Dataset, demonstrating consistent performance improvements across both benchmarks.
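The abstract describes the mechanism only at a high level. The sketch below illustrates how such a plug-and-play augmentation could look in PyTorch: query an MLLM for a structured scene description, encode the text, and distill that encoding into a small learnable embedding concatenated with the frozen forecaster's features. The function `query_mllm_scene_description`, the stand-in 768-d text encoding, and fusion by concatenation are assumptions of this sketch, not the paper's actual implementation.

```python
# Minimal sketch of a PnF-style augmentation, under the assumptions stated above.
import torch
import torch.nn as nn


def query_mllm_scene_description(prompt: str) -> str:
    """Hypothetical stand-in for a zero-shot MLLM query over the driving scene.
    A real system would call a multimodal LLM here; we return a fixed string."""
    return "pedestrian near crosswalk; cyclist merging; ego vehicle yielding"


class TextDistiller(nn.Module):
    """Distills a frozen text encoding of the MLLM output into a small
    learnable embedding that augments an existing forecaster's features."""

    def __init__(self, text_dim: int = 768, out_dim: int = 64):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(text_dim, 128), nn.ReLU(), nn.Linear(128, out_dim)
        )

    def forward(self, text_feat: torch.Tensor) -> torch.Tensor:
        return self.proj(text_feat)


# 1) Ask the MLLM for structured scene understanding (zero-shot, no fine-tuning).
desc = query_mllm_scene_description(
    "Describe each agent's likely behavior in this driving scene."
)

# 2) Encode the text. A random 768-d vector stands in for a frozen text encoder.
text_feat = torch.randn(1, 768)

# 3) Distill into a learnable embedding and fuse with the base model's features.
distiller = TextDistiller()
scene_emb = distiller(text_feat)                 # (1, 64)
base_agent_feat = torch.randn(1, 256)            # features from the frozen forecaster
augmented = torch.cat([base_agent_feat, scene_emb], dim=-1)  # plug-and-play fusion
print(augmented.shape)                           # torch.Size([1, 320])
```

Fusion by concatenation keeps the base forecaster and the MLLM frozen, so only the small distiller would need training, which is consistent with the abstract's claim that no fine-tuning is required.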
Similar Papers
Large Foundation Models for Trajectory Prediction in Autonomous Driving: A Comprehensive Survey
Robotics
Helps self-driving cars predict where others will go.
PerFACT: Motion Policy with LLM-Powered Dataset Synthesis and Fusion Action-Chunking Transformers
Robotics
Robots learn to move faster in new places.
VLMPlanner: Integrating Visual Language Models with Motion Planning
Artificial Intelligence
Helps self-driving cars see and react better.