Mars Traversability Prediction: A Multi-modal Self-supervised Approach for Costmap Generation
By: Zongwu Xie, Kaijie Yun, Yang Liu, and more
Potential Business Impact:
Helps robots drive safely on other planets.
We present a robust multi-modal framework for predicting traversability costmaps for planetary rovers. Our model fuses camera and LiDAR data to produce a bird's-eye-view (BEV) terrain costmap and is trained in a self-supervised manner using IMU-derived labels. Key components include a DINOv3-based image encoder, FiLM-based sensor fusion, and an optimization loss combining Huber and smoothness terms. Experimental ablations (removing image color, occluding inputs, adding noise) show only minor changes in MAE/MSE (e.g., MAE increases from ~0.0775 to 0.0915 when the LiDAR input is sparsified), indicating that geometry dominates the learned cost and that the model is highly robust. We attribute these small performance differences to the IMU-derived labels primarily reflecting terrain geometry rather than semantics, and to limited data diversity. Unlike prior work that claims large gains, we emphasize our contributions: (1) a high-fidelity, reproducible simulation environment; (2) a self-supervised IMU-based labeling pipeline; and (3) a strong multi-modal BEV costmap prediction model. We discuss limitations and future work, such as domain generalization and dataset expansion.
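The abstract describes an optimization loss that combines a Huber term with a smoothness term. As a rough illustration only, the sketch below shows one common way such a loss can be assembled in PyTorch; the function name, tensor shapes, label mask, delta, and smoothness weight are assumptions made for this example and are not taken from the paper.

```python
import torch
import torch.nn.functional as F

def costmap_loss(pred, target, valid_mask, huber_delta=1.0, smooth_weight=0.1):
    """Illustrative combined loss for a BEV costmap (not the paper's exact implementation).

    pred, target: (B, 1, H, W) predicted and IMU-derived cost values.
    valid_mask:   (B, 1, H, W) 1.0 where a self-supervised label exists, else 0.0.
    huber_delta, smooth_weight: hypothetical hyperparameters.
    """
    # Huber (smooth L1) regression term, averaged over labeled cells only.
    huber = F.huber_loss(pred, target, delta=huber_delta, reduction='none')
    huber = (huber * valid_mask).sum() / valid_mask.sum().clamp(min=1.0)

    # Total-variation-style smoothness term over neighboring BEV cells.
    dx = (pred[..., :, 1:] - pred[..., :, :-1]).abs().mean()
    dy = (pred[..., 1:, :] - pred[..., :-1, :]).abs().mean()
    smooth = dx + dy

    return huber + smooth_weight * smooth
```

In a sketch like this, the Huber term keeps the regression robust to noisy IMU-derived labels, while the smoothness term discourages abrupt cost changes between adjacent BEV cells.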
Similar Papers
Scene-Agnostic Traversability Labeling and Estimation via a Multimodal Self-supervised Framework
Robotics
Helps robots safely cross any ground.
Self-Supervised Traversability Learning with Online Prototype Adaptation for Off-Road Autonomous Driving
Robotics
Helps self-driving cars navigate rough ground safely.
Towards Zero-Shot Terrain Traversability Estimation: Challenges and Opportunities
Robotics
Lets robots judge water-crossing safety from pictures.