Self-Aware Adaptive Alignment: Enabling Accurate Perception for Intelligent Transportation Systems
By: Tong Xiang, Hongxia Zhao, Fenghua Zhu, and more
Potential Business Impact:
Helps self-driving cars see in different weather.
Achieving top-tier detection performance in Intelligent Transportation Systems is a critical research area, yet many challenges remain when detection must generalize across domains. In this paper, we propose Self-Aware Adaptive Alignment (SA3), which leverages an efficient alignment mechanism and recognition strategy. Our method employs a dedicated attention-based alignment module, trained on source and target domain datasets, to guide image-level feature alignment and enable local-global adaptive alignment between the source and target domains. Features from both domains, with their channel importance re-weighted, are fed into the region proposal network, facilitating the acquisition of salient region features. We further introduce an instance-to-image level alignment module specific to the target domain to adaptively mitigate the domain gap. To evaluate the proposed method, we conduct extensive experiments on popular cross-domain object detection benchmarks. Experimental results show that SA3 outperforms previous state-of-the-art methods.
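The channel re-weighting step mentioned in the abstract can be sketched as follows. This is a minimal illustration only: the function name `channel_reweight`, the weight shapes, and the gating form (global average pooling followed by a small bottleneck and a sigmoid gate, in the style of squeeze-and-excitation) are assumptions, since the paper's exact module is not specified here.

```python
import numpy as np

def channel_reweight(feats, w1, w2):
    """Re-weight per-channel importance of a feature map (hypothetical sketch).

    feats: (C, H, W) feature map from one domain.
    w1:    (C, C // r) bottleneck weights; w2: (C // r, C) expansion weights.
    Each channel is summarized by global average pooling, passed through a
    ReLU bottleneck, and gated with a sigmoid weight in (0, 1).
    """
    pooled = feats.mean(axis=(1, 2))              # (C,) per-channel descriptor
    hidden = np.maximum(pooled @ w1, 0.0)         # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))   # sigmoid gate per channel
    return feats * gate[:, None, None]            # scale each channel map

# Toy usage: 8 channels, reduction ratio 2
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((8, 4)) * 0.1
w2 = rng.standard_normal((4, 8)) * 0.1
out = channel_reweight(feats, w1, w2)
print(out.shape)  # (8, 4, 4)
```

Because the sigmoid gate lies strictly between 0 and 1, re-weighting attenuates less informative channels rather than zeroing them, which keeps the downstream region proposal network's inputs well-conditioned.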
Similar Papers
Towards Real-world Lens Active Alignment with Unlabeled Data via Domain Adaptation
CV and Pattern Recognition
Makes robots build tiny lenses much faster.
Rethinking the Spatio-Temporal Alignment of End-to-End 3D Perception
CV and Pattern Recognition
Helps self-driving cars see better in bad weather.
Boosting Adversarial Transferability with Spatial Adversarial Alignment
CV and Pattern Recognition
Makes computer "hacks" work on different AI brains.