Mars Traversability Prediction: A Multi-modal Self-supervised Approach for Costmap Generation

Published: September 14, 2025 | arXiv ID: 2509.11082v1

By: Zongwu Xie, Kaijie Yun, Yang Liu, and more

Potential Business Impact:

Helps robots safely drive on new planets.

Business Areas:
Autonomous Vehicles, Transportation

We present a robust multi-modal framework for predicting traversability costmaps for planetary rovers. Our model fuses camera and LiDAR data to produce a bird's-eye-view (BEV) terrain costmap, trained in a self-supervised manner using IMU-derived labels. Key updates include a DINOv3-based image encoder, FiLM-based sensor fusion, and an optimization loss combining Huber and smoothness terms. Experimental ablations (removing image color, occluding inputs, adding noise) show only minor changes in MAE/MSE (e.g., MAE increases from ~0.0775 to 0.0915 when the LiDAR input is sparsified), indicating that geometry dominates the learned cost and that the model is highly robust. We attribute the small performance differences to the IMU labels primarily reflecting terrain geometry rather than semantics, and to limited data diversity. Rather than claiming large gains over prior work, we emphasize three contributions: (1) a high-fidelity, reproducible simulation environment; (2) a self-supervised IMU-based labeling pipeline; and (3) a strong multi-modal BEV costmap prediction model. We discuss limitations and future work, such as domain generalization and dataset expansion.
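The abstract names two concrete ingredients: FiLM-style feature modulation for sensor fusion, and a loss that combines a Huber regression term with a smoothness penalty over the BEV grid. The paper does not publish these details here, so the sketch below is illustrative only: the function names, the total-variation form of the smoothness term, and the hyperparameters `lam` and `delta` are assumptions, not the authors' implementation.

```python
import numpy as np

def film(features, gamma, beta):
    # FiLM (feature-wise linear modulation): one sensor stream produces
    # per-channel scale (gamma) and shift (beta) applied to the other's features.
    return gamma * features + beta

def huber(residual, delta=1.0):
    # Quadratic near zero, linear in the tails: robust to noisy IMU-derived labels.
    a = np.abs(residual)
    return np.where(a <= delta, 0.5 * a**2, delta * (a - 0.5 * delta))

def costmap_loss(pred, target, lam=0.1, delta=1.0):
    """Hypothetical combined loss for a BEV costmap:
    Huber data term against IMU-derived labels, plus a total-variation
    smoothness penalty on adjacent grid cells (weight `lam` assumed)."""
    data_term = huber(pred - target, delta).mean()
    # Penalize differences between vertically and horizontally adjacent BEV cells.
    tv = np.abs(np.diff(pred, axis=0)).mean() + np.abs(np.diff(pred, axis=1)).mean()
    return data_term + lam * tv
```

A perfectly fitted, spatially constant costmap yields zero loss; any cell-to-cell variation in the prediction is charged at rate `lam`, which is what biases the output toward smooth, drivable cost surfaces.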

Country of Origin
🇨🇳 China

Page Count
8 pages

Category
Computer Science:
Computer Vision and Pattern Recognition