RCDINO: Enhancing Radar-Camera 3D Object Detection with DINOv2 Semantic Features
By: Olga Matykina, Dmitry Yudin
Potential Business Impact:
Helps cars "see" better with cameras and radar.
Three-dimensional object detection is essential for autonomous driving and robotics, and relies on effective fusion of multimodal camera and radar data. This work proposes RCDINO, a multimodal transformer-based model that enhances visual backbone features by fusing them with semantically rich representations from the pretrained DINOv2 foundation model. This enriches the visual representations and improves detection performance while preserving compatibility with the baseline architecture. Experiments on the nuScenes dataset demonstrate that RCDINO achieves state-of-the-art performance among radar-camera models, with 56.4 NDS and 48.1 mAP. Our implementation is available at https://github.com/OlgaMatykina/RCDINO.
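The core idea, fusing backbone feature maps with DINOv2 features, can be illustrated with a minimal sketch. This is an assumption-laden toy version (nearest-neighbour upsampling, channel concatenation, and a random 1x1 projection standing in for learned weights), not the actual RCDINO fusion module; the real details are in the linked repository.

```python
import numpy as np

def fuse_with_dino(backbone_feat, dino_feat, rng=None):
    """Hypothetical fusion sketch: upsample a DINOv2 feature map to the
    backbone's spatial size, concatenate along channels, and project back
    to the backbone channel count with a 1x1 (per-pixel) linear map.

    backbone_feat: (C_b, H, W) array; dino_feat: (C_d, h, w) array.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    C_b, H, W = backbone_feat.shape
    C_d, h, w = dino_feat.shape
    # Nearest-neighbour resize of the DINOv2 map to (H, W)
    ys = np.arange(H) * h // H
    xs = np.arange(W) * w // W
    dino_up = dino_feat[:, ys][:, :, xs]                       # (C_d, H, W)
    # Channel-wise concatenation of the two modalities' features
    fused = np.concatenate([backbone_feat, dino_up], axis=0)   # (C_b+C_d, H, W)
    # Random 1x1 projection back to C_b channels (stands in for learned weights)
    proj = rng.standard_normal((C_b, C_b + C_d)) / np.sqrt(C_b + C_d)
    return np.einsum('oc,chw->ohw', proj, fused)               # (C_b, H, W)

# Example shapes: ResNet-like features plus DINOv2 patch features
backbone = np.random.default_rng(1).standard_normal((256, 32, 88))
dino = np.random.default_rng(2).standard_normal((768, 16, 44))
fused = fuse_with_dino(backbone, dino)
print(fused.shape)  # (256, 32, 88)
```

The key property this sketch preserves from the paper's description is compatibility: the fused output has the same shape as the original backbone features, so downstream detection heads need no changes.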
Similar Papers
ChangeDINO: DINOv3-Driven Building Change Detection in Optical Remote Sensing Imagery
CV and Pattern Recognition
Finds building changes in satellite pictures.
DINO-YOLO: Self-Supervised Pre-training for Data-Efficient Object Detection in Civil Engineering Applications
CV and Pattern Recognition
Finds cracks and safety gear in construction photos.
DINO-CoDT: Multi-class Collaborative Detection and Tracking with Vision Foundation Models
CV and Pattern Recognition
Helps cars see and track all road users.