Dual-Stream Attention with Multi-Modal Queries for Object Detection in Transportation Applications
By: Noreen Anwar, Guillaume-Alexandre Bilodeau, Wassim Bouachir
Potential Business Impact:
Finds hidden objects in messy pictures better.
Transformer-based object detectors often struggle with occlusions, fine-grained localization, and computational inefficiency caused by fixed queries and dense attention. We propose DAMM (Dual-stream Attention with Multi-Modal queries), a novel framework introducing both query adaptation and structured cross-attention for improved accuracy and efficiency. DAMM leverages three types of queries: appearance-based queries from vision-language models, positional queries using polygonal embeddings, and randomly initialized learned queries for general scene coverage. Furthermore, a dual-stream cross-attention module refines semantic and spatial features separately, boosting localization precision in cluttered scenes. We evaluated DAMM on four challenging benchmarks, where it achieved state-of-the-art average precision (AP) and recall, demonstrating the effectiveness of multi-modal query adaptation and dual-stream attention. Source code is available at https://github.com/DET-LIP/DAMM.
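The abstract names two architectural ideas: a query set assembled from three sources (vision-language embeddings, polygonal positional embeddings, and learned queries) and a cross-attention module with separate semantic and spatial streams. Below is a minimal PyTorch sketch of how such components could be wired. Every module name, dimension, and the residual-sum fusion of the two streams are assumptions made for illustration, not the authors' implementation; see the linked repository for the actual design.

import torch
import torch.nn as nn


class MultiModalQueries(nn.Module):
    """Builds the three query types named in the abstract: appearance-based
    queries projected from vision-language model embeddings, positional
    queries from polygonal embeddings, and random learned queries."""

    def __init__(self, d_model=256, vlm_dim=512, n_polygon_points=8, n_random=100):
        super().__init__()
        # Project VLM embeddings into the detector's query space.
        self.vlm_proj = nn.Linear(vlm_dim, d_model)
        # Embed polygon vertices (x, y per point) as positional queries.
        self.poly_proj = nn.Linear(2 * n_polygon_points, d_model)
        # Plain learned queries for general scene coverage.
        self.random_queries = nn.Parameter(torch.randn(n_random, d_model))

    def forward(self, vlm_embeddings, polygons):
        # vlm_embeddings: (B, N_app, vlm_dim); polygons: (B, N_pos, P, 2)
        b = vlm_embeddings.size(0)
        q_app = self.vlm_proj(vlm_embeddings)
        q_pos = self.poly_proj(polygons.flatten(2))
        q_rand = self.random_queries.unsqueeze(0).expand(b, -1, -1)
        # Concatenate the three query sets along the query axis.
        return torch.cat([q_app, q_pos, q_rand], dim=1)


class DualStreamCrossAttention(nn.Module):
    """Two parallel cross-attention branches: one attends over semantic
    (content) features, the other over spatial (positional) features.
    Fusing the two streams by residual addition is an assumption."""

    def __init__(self, d_model=256, n_heads=8):
        super().__init__()
        self.semantic_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.spatial_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, queries, semantic_feats, spatial_feats):
        # queries: (B, N_q, d); semantic_feats, spatial_feats: (B, N_kv, d)
        sem, _ = self.semantic_attn(queries, semantic_feats, semantic_feats)
        spa, _ = self.spatial_attn(queries, spatial_feats, spatial_feats)
        return self.norm(queries + sem + spa)


# Toy forward pass with random tensors standing in for real features.
queries = MultiModalQueries()(torch.randn(2, 100, 512), torch.randn(2, 100, 8, 2))
out = DualStreamCrossAttention()(queries, torch.randn(2, 900, 256), torch.randn(2, 900, 256))
print(out.shape)  # torch.Size([2, 300, 256])

Keeping separate key/value streams for content and position is what, per the abstract, sharpens localization in cluttered scenes; how DAMM actually fuses the two streams may differ from the residual sum sketched here.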
Similar Papers
FSDAM: Few-Shot Driving Attention Modeling via Vision-Language Coupling
CV and Pattern Recognition
Teaches cars where drivers look with less data.
MODA: MOdular Duplex Attention for Multimodal Perception, Cognition, and Emotion Understanding
CV and Pattern Recognition
Helps computers understand pictures and words better.
DAMap: Distance-aware MapNet for High Quality HD Map Construction
CV and Pattern Recognition
Makes self-driving cars see maps better.