GraphFusion3D: Dynamic Graph Attention Convolution with Adaptive Cross-Modal Transformer for 3D Object Detection
By: Md Sohag Mia, Md Nahid Hasan, Tawhid Ahmed, and more
Potential Business Impact:
Helps robots see and understand 3D objects better.
Despite significant progress in 3D object detection, point clouds remain challenging due to sparse data, incomplete structures, and limited semantic information. Capturing contextual relationships between distant objects presents additional difficulties. To address these challenges, we propose GraphFusion3D, a unified framework combining multi-modal fusion with advanced feature learning. Our approach introduces the Adaptive Cross-Modal Transformer (ACMT), which adaptively integrates image features into point representations to enrich both geometric and semantic information. For proposal refinement, we introduce the Graph Reasoning Module (GRM), a novel mechanism that models neighborhood relationships to simultaneously capture local geometric structures and global semantic context. The module employs multi-scale graph attention to dynamically weight both spatial proximity and feature similarity between proposals. We further employ a cascade decoder that progressively refines detections through multi-stage predictions. Extensive experiments on SUN RGB-D (70.6\% AP$_{25}$ and 51.2\% AP$_{50}$) and ScanNetV2 (75.1\% AP$_{25}$ and 60.8\% AP$_{50}$) demonstrate a substantial performance improvement over existing approaches.
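The Graph Reasoning Module is described as weighting both spatial proximity and feature similarity between proposals via graph attention. A minimal NumPy sketch of that idea is below; the function name, the multiplicative combination of the two cues, and the single-scale softmax are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def proposal_attention(centers, feats, tau=1.0):
    """Sketch: attention over a proposal graph that combines spatial
    proximity and feature similarity (hypothetical simplification)."""
    # pairwise Euclidean distances between proposal centers
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    prox = np.exp(-d / tau)  # nearer proposals -> larger proximity score
    # cosine similarity between proposal feature vectors
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = f @ f.T
    # combine the two cues and normalize row-wise with a softmax,
    # giving each proposal a distribution over its neighbors
    logits = prox * sim
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(logits)
    return w / w.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
centers = rng.random((5, 3))   # 3D proposal centers
feats = rng.random((5, 16))    # per-proposal feature vectors
A = proposal_attention(centers, feats)  # (5, 5) row-stochastic weights
```

Each row of `A` is a normalized attention distribution, so aggregating neighbor features as `A @ feats` would mix local geometric context with semantically similar proposals, which is the intuition the abstract attributes to the GRM.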
Similar Papers
A Dual-Attention Graph Network for fMRI Data Classification
Machine Learning (CS)
Helps doctors find autism using brain scans.
AG-Fusion: Adaptive Gated Multimodal Fusion for 3D Object Detection in Complex Scenes
CV and Pattern Recognition
Helps self-driving cars see better in bad weather.
DGFusion: Dual-guided Fusion for Robust Multi-Modal 3D Object Detection
CV and Pattern Recognition
Helps self-driving cars see far-away objects better.