PIE: Perception and Interaction Enhanced End-to-End Motion Planning for Autonomous Driving
By: Chengran Yuan, Zijian Lu, Zhanqi Zhang, and more
Potential Business Impact:
Helps self-driving cars plan safer, smoother routes.
End-to-end motion planning is promising for simplifying complex autonomous driving pipelines. However, challenges such as scene understanding and effective prediction for decision-making continue to present substantial obstacles to its large-scale deployment. In this paper, we present PIE, a pioneering framework that integrates advanced perception, reasoning, and intention modeling to dynamically capture interactions between the ego vehicle and surrounding agents. It incorporates a bidirectional Mamba fusion module that mitigates compression losses in the multimodal fusion of camera and LiDAR inputs, alongside a novel reasoning-enhanced decoder that integrates Mamba and Mixture-of-Experts layers to facilitate scene-compliant anchor selection and adaptive trajectory inference. PIE also adopts an action-motion interaction module that uses state predictions of surrounding agents to refine ego planning. The framework is thoroughly validated on the NAVSIM benchmark: without any ensemble or data augmentation techniques, PIE achieves an 88.9 PDM score and an 85.6 EPDM score, surpassing prior state-of-the-art methods. Comprehensive quantitative and qualitative analyses demonstrate that PIE reliably generates feasible, high-quality ego trajectories.
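Since the listing gives only the abstract, the PyTorch sketch below is a purely illustrative rendering of two ingredients the abstract names: a bidirectional scan that fuses camera and LiDAR tokens, and a top-k Mixture-of-Experts layer of the kind a reasoning-enhanced decoder could route through. The class names (`BidirectionalFusion`, `TopKMoE`), the dimensions, and the use of `nn.GRU` as a stand-in for Mamba's selective scan are all assumptions of this sketch, not the authors' implementation.

```python
# Illustrative sketch only; names, shapes, and the GRU stand-ins for Mamba
# blocks are assumptions, since the abstract does not specify the internals.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BidirectionalFusion(nn.Module):
    """Fuse camera and LiDAR token sequences with forward and backward
    sequence scans, mimicking a bidirectional Mamba-style fusion."""

    def __init__(self, dim: int):
        super().__init__()
        # One scan per direction; nn.GRU stands in for a Mamba block here.
        self.fwd = nn.GRU(dim, dim, batch_first=True)
        self.bwd = nn.GRU(dim, dim, batch_first=True)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, cam: torch.Tensor, lidar: torch.Tensor) -> torch.Tensor:
        # cam: (B, Nc, D), lidar: (B, Nl, D) -> joint tokens (B, Nc+Nl, D)
        tokens = torch.cat([cam, lidar], dim=1)
        h_fwd, _ = self.fwd(tokens)            # left-to-right scan
        h_bwd, _ = self.bwd(tokens.flip(1))    # right-to-left scan
        h_bwd = h_bwd.flip(1)                  # realign to original token order
        return self.proj(torch.cat([h_fwd, h_bwd], dim=-1))


class TopKMoE(nn.Module):
    """Generic top-k routed Mixture-of-Experts feed-forward layer, the usual
    form of expert routing a decoder could use for adaptive inference."""

    def __init__(self, dim: int, num_experts: int = 4, k: int = 2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                          nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, D). Route every token to its top-k experts.
        scores = self.gate(x)                      # (B, N, E)
        weights, idx = scores.topk(self.k, dim=-1)  # both (B, N, k)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        # Dense evaluation for simplicity: each expert sees all tokens, and
        # the routing weight zeroes out tokens not assigned to it.
        for e, expert in enumerate(self.experts):
            w = (weights * (idx == e)).sum(-1, keepdim=True)  # (B, N, 1)
            out = out + w * expert(x)
        return out


if __name__ == "__main__":
    B, Nc, Nl, D = 2, 16, 32, 64
    fused = BidirectionalFusion(D)(torch.randn(B, Nc, D), torch.randn(B, Nl, D))
    print(TopKMoE(D)(fused).shape)  # torch.Size([2, 48, 64])
```

The backward scan is realized by flipping the token sequence, scanning, and flipping back, which is the standard way to get bidirectional context out of a causal scan; how PIE actually selects anchors and refines trajectories from these representations is described in the paper itself.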
Similar Papers
Fully Unified Motion Planning for End-to-End Autonomous Driving
CV and Pattern Recognition
Teaches self-driving cars to learn from all cars.
DrivePI: Spatial-aware 4D MLLM for Unified Autonomous Driving Understanding, Perception, Prediction and Planning
CV and Pattern Recognition
Helps self-driving cars see and plan better.
Prediction-Driven Motion Planning: Route Integration Strategies in Attention-Based Prediction Models
Robotics
Helps self-driving cars plan safer routes.