Self-supervised Pretraining for Integrated Prediction and Planning of Automated Vehicles
By: Yangang Ren, Guojian Zhan, Chen Lv, and more
Potential Business Impact:
Helps self-driving cars plan safer, smarter trips.
Predicting the future of surrounding agents and accordingly planning a safe, goal-directed trajectory are crucial for automated vehicles. Current methods typically rely on imitation learning to optimize metrics against the ground truth, often overlooking how scene understanding could enable more holistic trajectories. In this paper, we propose Plan-MAE, a unified pretraining framework for prediction and planning that capitalizes on masked autoencoders. Plan-MAE fuses critical contextual understanding via three dedicated tasks: reconstructing masked road networks to learn spatial correlations, masked agent trajectories to model social interactions, and masked navigation routes to capture destination intents. To further align vehicle dynamics and safety constraints, we incorporate a local sub-planning task that predicts the ego-vehicle's near-term trajectory segment conditioned on the preceding segment. The pretrained model is subsequently fine-tuned on downstream tasks to jointly generate prediction and planning trajectories. Experiments on large-scale datasets demonstrate that Plan-MAE outperforms current methods on planning metrics by a large margin and can serve as an important pretraining step for learning-based motion planners.
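The core idea in the abstract, masked-autoencoder pretraining that reconstructs hidden parts of the scene from the visible remainder, can be sketched on toy data. The snippet below is a minimal illustration, not the paper's method: synthetic 2-D trajectories stand in for agent tracks, zeroing stands in for a learned mask token, and a single ridge-regression layer stands in for the transformer decoder; only the masked-reconstruction objective itself mirrors the MAE setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data standing in for agent trajectories: N smooth 2-D
# tracks of T waypoints each (Plan-MAE itself trains on real driving logs).
N, T = 512, 16
t = np.linspace(0.0, 1.0, T)
coef = rng.normal(size=(N, 3))
x = coef[:, :1] * t + coef[:, 1:2]          # straight-line x component
y = coef[:, 2:3] * np.sin(3.0 * t)          # curved y component
X = np.concatenate([x, y], axis=1)          # (N, 2T) flattened trajectories

# MAE-style corruption: hide a random 50% of waypoint coordinates by
# replacing them with 0 (a stand-in for a learned mask token).
mask = rng.random(X.shape) < 0.5
Xm = np.where(mask, 0.0, X)

# "Decoder": one ridge-regression layer mapping masked input -> full input.
alpha = 1e-2
W = np.linalg.solve(Xm.T @ Xm + alpha * np.eye(2 * T), Xm.T @ X)
recon = Xm @ W

# As in MAE pretraining, the loss is measured only on masked positions.
mse_masked = float(np.mean(((recon - X)[mask]) ** 2))
mse_baseline = float(np.mean((X[mask]) ** 2))   # error of predicting zeros
print(mse_masked < mse_baseline)  # prints True
```

Because nearby waypoints are correlated, even this linear map recovers masked coordinates far better than the trivial zero fill, which is the signal a masked-reconstruction pretext task exploits to learn scene structure before fine-tuning.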
Similar Papers
FloorplanMAE: A self-supervised framework for complete floorplan generation from partial inputs
Artificial Intelligence
Completes unfinished building plans automatically.
Vehicle-centric Perception via Multimodal Structured Pre-training
CV and Pattern Recognition
Teaches computers to better understand cars.
Self-Guided Masked Autoencoder
CV and Pattern Recognition
Teaches computers to see patterns faster.