Depth Matters: Multimodal RGB-D Perception for Robust Autonomous Agents
By: Mihaela-Larisa Clement, Mónika Farsang, Felix Resch, and more
Potential Business Impact:
Cars that see depth can drive more safely.
Autonomous agents that rely purely on perception to make real-time control decisions require efficient and robust architectures. In this work, we demonstrate that augmenting RGB input with depth information significantly enhances our agents' ability to predict steering commands compared to using RGB alone. We benchmark lightweight recurrent controllers that leverage the fused RGB-D features for sequential decision-making. To train our models, we collect high-quality data using a small-scale autonomous car controlled by an expert driver via a physical steering wheel, capturing varying levels of steering difficulty. Our models, trained under diverse configurations, are successfully deployed on real hardware. Notably, our findings reveal that the early fusion of depth data results in a highly robust controller, which remains effective even with frame drops and increased noise levels, without compromising the network's focus on the task.
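To make the early-fusion idea concrete, here is a minimal sketch (not the authors' code) of how depth can be concatenated with RGB at the input before a convolutional encoder, with the fused features fed to a lightweight recurrent controller that predicts steering. The layer sizes, the generic LSTM cell, and the class name EarlyFusionRGBDController are illustrative assumptions; the paper's actual architecture may differ.

```python
# Illustrative sketch of early RGB-D fusion feeding a small recurrent
# steering controller. All hyperparameters here are assumptions.
import torch
import torch.nn as nn

class EarlyFusionRGBDController(nn.Module):
    def __init__(self, hidden_size: int = 64):
        super().__init__()
        # Early fusion: depth is appended as a 4th input channel, so the
        # convolutional encoder sees a single fused RGB-D tensor per frame.
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Lightweight recurrent controller for sequential decision-making.
        self.rnn = nn.LSTM(32, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)  # steering command

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor):
        # rgb: (batch, time, 3, H, W); depth: (batch, time, 1, H, W)
        x = torch.cat([rgb, depth], dim=2)      # fuse on the channel axis
        b, t = x.shape[:2]
        feats = self.encoder(x.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)
        return self.head(out)                   # (batch, time, 1) steering

# Example: two sequences of 8 frames at 64x64 resolution.
model = EarlyFusionRGBDController()
steering = model(torch.rand(2, 8, 3, 64, 64), torch.rand(2, 8, 1, 64, 64))
print(steering.shape)  # torch.Size([2, 8, 1])
```

Fusing at the input is what lets a single encoder learn joint RGB-D features, which is the property the abstract credits for robustness to frame drops and noise.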
Similar Papers
Depth as Points: Center Point-based Depth Estimation
CV and Pattern Recognition
Helps self-driving cars see better and faster.
Geometry-Aware Sparse Depth Sampling for High-Fidelity RGB-D Depth Completion in Robotic Systems
CV and Pattern Recognition
Makes robots see better by fixing blurry depth pictures.
DepthVision: Robust Vision-Language Understanding through GAN-Based LiDAR-to-RGB Synthesis
Robotics
Helps robots see better in the dark.