Semantic-Aware Ship Detection with Vision-Language Integration
By: Jiahao Li, Jiancheng Pan, Yuze Sun, and more
Potential Business Impact:
Finds ships in satellite pictures more accurately, even small ones.
Ship detection in remote sensing imagery is a critical task with wide-ranging applications, such as maritime activity monitoring, shipping logistics, and environmental studies. However, existing methods often struggle to capture fine-grained semantic information, limiting their effectiveness in complex scenarios. To address these challenges, we propose a novel detection framework that combines Vision-Language Models (VLMs) with a multi-scale adaptive sliding window strategy. To facilitate Semantic-Aware Ship Detection (SASD), we introduce ShipSem-VL, a specialized Vision-Language dataset designed to capture fine-grained ship attributes. We evaluate our framework through three well-defined tasks, providing a comprehensive analysis of its performance and demonstrating its effectiveness in advancing SASD from multiple perspectives.
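The abstract does not detail how the multi-scale adaptive sliding window works, but the general idea can be sketched as tiling an image with overlapping crops at several window sizes, so small ships stay visible at fine scales while larger windows preserve context. The window sizes, overlap ratio, and function names below are illustrative assumptions, not the authors' actual method.

```python
# A minimal sketch of a multi-scale sliding-window tiler (assumed design,
# not the paper's implementation). Each crop would be passed to a
# vision-language model for semantic-aware ship detection.

def sliding_windows(img_w, img_h, win_sizes=(256, 512, 1024), overlap=0.25):
    """Yield (x, y, w, h) crop boxes at several window scales.

    Smaller windows keep small ships large enough in the crop;
    larger windows provide surrounding context.
    """
    boxes = []
    for win in win_sizes:
        if win > img_w or win > img_h:
            continue  # skip scales that do not fit the image
        stride = max(1, int(win * (1 - overlap)))
        xs = list(range(0, max(img_w - win, 0) + 1, stride))
        ys = list(range(0, max(img_h - win, 0) + 1, stride))
        # make sure the right and bottom edges are always covered
        if xs[-1] + win < img_w:
            xs.append(img_w - win)
        if ys[-1] + win < img_h:
            ys.append(img_h - win)
        for y in ys:
            for x in xs:
                boxes.append((x, y, win, win))
    return boxes

# Example: a 1024x768 scene tiled with 512-px windows at 50% overlap
boxes = sliding_windows(1024, 768, win_sizes=(512,), overlap=0.5)
```

An "adaptive" variant might additionally vary `win_sizes` or `overlap` per region, e.g. densifying windows where a coarse pass suggests ship clusters; that logic is omitted here.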
Similar Papers
Enhancing, Refining, and Fusing: Towards Robust Multi-Scale and Dense Ship Detection
CV and Pattern Recognition
Finds many ships, even when close together.
Spatial-aware Vision Language Model for Autonomous Driving
CV and Pattern Recognition
Helps self-driving cars see in 3D.
SATGround: A Spatially-Aware Approach for Visual Grounding in Remote Sensing
CV and Pattern Recognition
Finds things in satellite pictures using words.