S-LAM3D: Segmentation-Guided Monocular 3D Object Detection via Feature Space Fusion
By: Diana-Alexandra Sas, Florin Oniga
Potential Business Impact:
Helps computers see 3D objects from flat pictures.
Monocular 3D Object Detection is a challenging Computer Vision task because of the nature of its input: a single 2D image with no depth cues, which makes depth estimation an ill-posed problem. Existing solutions extract features from the input using Convolutional Neural Network or Transformer backbones, followed by dedicated detection heads that predict the 3D parameters. In this paper, we introduce a decoupled strategy that injects precomputed segmentation priors and fuses them directly into the feature space to guide detection, without enlarging the detection model or jointly learning the priors. The focus is on evaluating the impact of additional segmentation information on existing detection pipelines without adding extra prediction branches. The proposed method is evaluated on the KITTI 3D Object Detection Benchmark, where it outperforms the equivalent architecture that relies only on RGB image features on the small objects in the scene, pedestrians and cyclists, showing that a better understanding of the input data can offset the need for additional sensors or training data.
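The abstract's core idea, fusing a precomputed segmentation prior directly into the backbone's feature space, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the exact fusion operator is not specified here, so we assume the prior is resized to the feature resolution, concatenated along the channel axis, and mixed with a learned 1x1 convolution (written as a per-pixel matrix multiply in NumPy). All names and shapes are hypothetical.

```python
import numpy as np

def fuse_seg_prior(feats, seg_prior, w, b):
    """Fuse a precomputed segmentation prior into a feature map.

    Hypothetical sketch of feature-space fusion:
      feats:     (C, H, W)  backbone feature map
      seg_prior: (K, Hi, Wi) per-class segmentation scores at image resolution
      w:         (C, C + K) weights of a 1x1 "fusion" convolution
      b:         (C,)       bias of the fusion convolution
    Returns a fused feature map of shape (C, H, W).
    """
    C, H, W = feats.shape
    K, Hi, Wi = seg_prior.shape
    # Nearest-neighbor downsample of the prior to the feature resolution.
    ys = np.arange(H) * Hi // H
    xs = np.arange(W) * Wi // W
    prior = seg_prior[:, ys][:, :, xs]                    # (K, H, W)
    stacked = np.concatenate([feats, prior], axis=0)      # (C + K, H, W)
    # A 1x1 convolution is a per-pixel linear map over channels.
    return np.einsum('oc,chw->ohw', w, stacked) + b[:, None, None]

# Example: fuse a 19-class segmentation prior into a 256-channel feature map.
rng = np.random.default_rng(0)
feats = rng.standard_normal((256, 48, 160))   # backbone features
seg = rng.random((19, 384, 1280))             # precomputed segmentation scores
w = rng.standard_normal((256, 256 + 19)) * 0.01
b = np.zeros(256)
fused = fuse_seg_prior(feats, seg, w, b)
print(fused.shape)  # (256, 48, 160)
```

Because the segmentation is precomputed and injected at the feature level, the detection model itself gains no extra prediction branches, which matches the decoupled strategy the abstract describes.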
Similar Papers
A Multimodal Hybrid Late-Cascade Fusion Network for Enhanced 3D Object Detection
CV and Pattern Recognition
Helps cars see people and bikes better.
Sparse Multiview Open-Vocabulary 3D Detection
CV and Pattern Recognition
Lets computers see and find objects in 3D.
A Light Perspective for 3D Object Detection
CV and Pattern Recognition
Helps self-driving cars see better with less power.