DepthVision: Robust Vision-Language Understanding through GAN-Based LiDAR-to-RGB Synthesis
By: Sven Kirchner, Nils Purschke, Ross Greer, et al.
Potential Business Impact:
Helps robots see better in the dark.
Ensuring reliable robot operation when visual input is degraded or insufficient remains a central challenge in robotics. This letter introduces DepthVision, a framework for multimodal scene understanding designed to address this problem. Unlike existing Vision-Language Models (VLMs), which rely solely on camera-based visual input alongside language, DepthVision synthesizes RGB images from sparse LiDAR point clouds using a conditional generative adversarial network (GAN) with an integrated refiner network. These synthetic views are then combined with real RGB data using Luminance-Aware Modality Adaptation (LAMA), which dynamically blends the two modalities based on ambient lighting conditions. This approach compensates for sensor degradation, such as darkness or motion blur, without requiring any fine-tuning of the downstream vision-language models. We evaluate DepthVision on real and simulated datasets across various models and tasks, with particular attention to safety-critical scenarios. The results demonstrate that our approach improves performance in low-light conditions, achieving substantial gains over RGB-only baselines while preserving compatibility with frozen VLMs. This work highlights the potential of LiDAR-guided RGB synthesis for robust robot operation in real-world environments.
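To make the luminance-aware blending idea concrete, the following is a minimal sketch of fusing a real camera frame with a LiDAR-synthesized frame based on scene brightness. The function names, the Rec. 601 luma estimate, and the brightness thresholds are illustrative assumptions rather than the paper's implementation; images are assumed to be float arrays in [0, 1] with shape (H, W, 3).

import numpy as np

def estimate_luminance(rgb: np.ndarray) -> float:
    # Mean Rec. 601 luma of an RGB image in [0, 1] (assumed proxy for ambient light).
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return float(np.mean(0.299 * r + 0.587 * g + 0.114 * b))

def lama_blend(rgb_real: np.ndarray, rgb_from_lidar: np.ndarray,
               low: float = 0.15, high: float = 0.45) -> np.ndarray:
    # Rely on the camera when the scene is bright; fall back to the
    # LiDAR-synthesized view as the scene darkens.
    # `low` and `high` are illustrative thresholds, not values from the paper.
    y = estimate_luminance(rgb_real)
    # Weight for the real camera image, ramping from 0 (dark) to 1 (bright).
    w = np.clip((y - low) / (high - low), 0.0, 1.0)
    return w * rgb_real + (1.0 - w) * rgb_from_lidar

In the setting described by the abstract, the blended image would then be fed to a frozen VLM in place of the raw camera frame, which is how compatibility without fine-tuning is preserved.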
Similar Papers
MonoDream: Monocular Vision-Language Navigation with Panoramic Dreaming
CV and Pattern Recognition
Helps robots navigate using just a single camera.
DGFusion: Depth-Guided Sensor Fusion for Robust Semantic Perception
CV and Pattern Recognition
Helps self-driving cars see better in bad weather.
MDE-AgriVLN: Agricultural Vision-and-Language Navigation with Monocular Depth Estimation
Robotics
Robots follow spoken directions to farm crops.