Visually Similar Pair Alignment for Robust Cross-Domain Object Detection
By: Onkar Krishna, Hiroki Ohashi
Potential Business Impact:
Makes object detectors work reliably when real-world conditions, such as fog, differ from their training data.
Domain gaps between training data (source) and real-world environments (target) often degrade the performance of object detection models. Most existing methods aim to bridge this gap by aligning features across source and target domains but often fail to account for visual differences, such as color or orientation, in alignment pairs. This limitation leads to less effective domain adaptation, as the model struggles to manage both domain-specific shifts (e.g., fog) and visual variations simultaneously. In this work, we demonstrate for the first time, using a custom-built dataset, that aligning visually similar pairs significantly improves domain adaptation. Based on this insight, we propose a novel memory-based system to enhance domain alignment. This system stores precomputed features of foreground objects and background areas from the source domain, which are periodically updated during training. By retrieving visually similar source features for alignment with target foreground and background features, the model effectively addresses domain-specific differences while reducing the impact of visual variations. Extensive experiments across diverse domain shift scenarios validate our method's effectiveness, achieving 53.1 mAP on Foggy Cityscapes and 62.3 mAP on Sim10k, surpassing prior state-of-the-art methods by 1.2 and 4.1 mAP, respectively.
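To make the memory-and-retrieval idea in the abstract more concrete, below is a minimal PyTorch-style sketch of a per-class source feature bank with cosine-similarity retrieval. Everything in it is an illustrative assumption, not the authors' implementation: the names (SourceFeatureMemory, alignment_loss), the EMA-style bank update, the slot and feature dimensions, and the simple cosine alignment objective are all placeholders for the general mechanism described.

```python
# Illustrative sketch (assumed, not the paper's code): a source-domain
# feature memory with visually-similar retrieval for target alignment.
import torch
import torch.nn.functional as F


class SourceFeatureMemory:
    """Stores source-domain features per class (foreground classes plus a
    background slot set) and retrieves the most visually similar entries
    for each target feature before alignment."""

    def __init__(self, num_classes: int, slots_per_class: int, dim: int):
        # One bank of `slots_per_class` unit-normalized feature vectors per class.
        self.bank = F.normalize(torch.randn(num_classes, slots_per_class, dim), dim=-1)

    @torch.no_grad()
    def update(self, cls_id: int, source_feats: torch.Tensor, momentum: float = 0.9):
        """Periodically refresh the bank with fresh source features
        (here: a simple exponential moving average over existing slots)."""
        feats = F.normalize(source_feats, dim=-1)
        k = min(len(feats), self.bank.shape[1])
        self.bank[cls_id, :k] = momentum * self.bank[cls_id, :k] + (1 - momentum) * feats[:k]
        self.bank[cls_id] = F.normalize(self.bank[cls_id], dim=-1)

    def retrieve(self, cls_id: int, target_feats: torch.Tensor) -> torch.Tensor:
        """Return, for each target feature, the most similar stored source
        feature of the same class (cosine similarity)."""
        t = F.normalize(target_feats, dim=-1)     # (N, dim)
        sims = t @ self.bank[cls_id].T            # (N, slots)
        best = sims.argmax(dim=1)                 # index of the closest slot
        return self.bank[cls_id][best]            # (N, dim)


def alignment_loss(target_feats: torch.Tensor, matched_source_feats: torch.Tensor) -> torch.Tensor:
    """Pull each target feature toward its visually most similar source
    counterpart (1 - cosine similarity)."""
    t = F.normalize(target_feats, dim=-1)
    s = F.normalize(matched_source_feats, dim=-1)
    return (1 - (t * s).sum(dim=-1)).mean()


# Usage sketch with made-up shapes: align target "car" features against
# retrieved source features during adaptation.
memory = SourceFeatureMemory(num_classes=9, slots_per_class=64, dim=256)
source_car_feats = torch.randn(32, 256)                  # e.g. ROI features from source images
target_car_feats = torch.randn(16, 256)                  # e.g. ROI features from target images
memory.update(cls_id=3, source_feats=source_car_feats)   # periodic source-side refresh
matched = memory.retrieve(cls_id=3, target_feats=target_car_feats)
loss = alignment_loss(target_car_feats, matched)
```

The intent of the sketch is only to show why retrieval matters: instead of aligning a target object against an arbitrary source feature, each target feature is matched to the stored source feature it already resembles, so the remaining gap to close is dominated by the domain shift rather than by incidental visual variation.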
Similar Papers
Lost in Translation? Vocabulary Alignment for Source-Free Domain Adaptation in Open-Vocabulary Semantic Segmentation
CV and Pattern Recognition
Helps computers see and name objects better.
Domain Adaptive SAR Wake Detection: Leveraging Similarity Filtering and Memory Guidance
CV and Pattern Recognition
Helps ships be seen in any weather.
Efficient and robust 3D blind harmonization for large domain gaps
Image and Video Processing
Makes blurry MRI scans look clear and consistent.