Graph Query Networks for Object Detection with Automotive Radar
By: Loveneet Saini, Hasan Tercan, Tobias Meisen
Potential Business Impact:
Helps cars see better with radar.
Object detection with 3D radar is essential for 360-degree automotive perception, but radar's long wavelengths produce sparse and irregular reflections that challenge traditional grid-based convolutional and sequence-based transformer detectors. This paper introduces Graph Query Networks (GQN), an attention-based framework that models objects sensed by radar as graphs in order to extract individualized relational and contextual features. GQN employs a novel concept of graph queries that dynamically attend over the bird's-eye view (BEV) space, constructing object-specific graphs processed by two novel modules: EdgeFocus for relational reasoning and DeepContext Pooling for contextual aggregation. On the NuScenes dataset, GQN improves relative mAP by up to +53%, including a +8.2% gain over the strongest prior radar method, while reducing peak graph construction overhead by 80% at a moderate FLOPs cost.
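To make the pipeline concrete, here is a minimal NumPy sketch of the graph-query idea as described in the abstract: a per-object query attends over BEV cells, the top-scoring cells become that object's graph nodes, a nearest-neighbor message-passing step stands in for EdgeFocus, and attention-weighted node pooling stands in for DeepContext Pooling. All function names, the distance-based edge weighting, and the pooling scheme are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def graph_query(bev, query, k_nodes=8, k_edges=3):
    """Hypothetical sketch of one graph query over a BEV feature map.

    bev:   (H, W, C) bird's-eye-view features
    query: (C,) object query vector
    Returns a pooled context vector for this query.
    """
    H, W, C = bev.shape
    feats = bev.reshape(-1, C)                      # (H*W, C)

    # Attend over BEV cells: scaled dot-product scores, softmax-normalized.
    scores = feats @ query / np.sqrt(C)
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()

    # Select the top-k cells as this object's graph nodes.
    idx = np.argsort(attn)[-k_nodes:]
    nodes = feats[idx]                              # (k_nodes, C)
    pos = np.stack(np.unravel_index(idx, (H, W)), axis=1).astype(float)

    # "EdgeFocus" stand-in: one round of message passing over the
    # k nearest neighbors, with edges weighted by inverse distance.
    dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    np.fill_diagonal(dist, np.inf)                  # no self-edges
    msgs = np.zeros_like(nodes)
    for i in range(k_nodes):
        nbrs = np.argsort(dist[i])[:k_edges]
        w = 1.0 / (1.0 + dist[i, nbrs])
        msgs[i] = (w[:, None] * nodes[nbrs]).sum(0) / w.sum()
    nodes = nodes + msgs                            # relational update

    # "DeepContext Pooling" stand-in: attention-weighted node pooling.
    node_attn = attn[idx] / attn[idx].sum()
    return (node_attn[:, None] * nodes).sum(0)      # (C,)

rng = np.random.default_rng(0)
bev = rng.normal(size=(16, 16, 32))
ctx = graph_query(bev, rng.normal(size=32))         # one query -> (32,)
```

Because each query builds only a small k-node graph instead of a dense graph over all BEV cells, construction cost stays low, which is consistent with the reported 80% reduction in peak graph construction overhead.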
Similar Papers
DQ3D: Depth-guided Query for Transformer-Based 3D Object Detection in Traffic Scenarios
CV and Pattern Recognition
Helps cars see hidden objects better.
Object Detection as an Optional Basis: A Graph Matching Network for Cross-View UAV Localization
CV and Pattern Recognition
Helps drones find their way without GPS.
GraphFusion3D: Dynamic Graph Attention Convolution with Adaptive Cross-Modal Transformer for 3D Object Detection
CV and Pattern Recognition
Helps robots see and understand 3D objects better.