From Words to Wavelengths: VLMs for Few-Shot Multispectral Object Detection
By: Manuel Nkegoum, Minh-Tan Pham, Élisa Fromont, and more
Multispectral object detection is critical for safety-sensitive applications such as autonomous driving and surveillance, where robust perception under diverse illumination conditions is essential. However, the limited availability of annotated multispectral data severely restricts the training of deep detectors. In such data-scarce scenarios, textual class information can serve as a valuable source of semantic supervision. Motivated by the recent success of Vision-Language Models (VLMs) in computer vision, we explore their potential for few-shot multispectral object detection. Specifically, we adapt two representative VLM-based detectors, Grounding DINO and YOLO-World, to handle multispectral inputs and propose an effective mechanism to integrate the text, visual, and thermal modalities. Through extensive experiments on two popular multispectral image benchmarks, FLIR and M3FD, we demonstrate that VLM-based detectors not only excel in few-shot regimes, significantly outperforming specialized multispectral models trained with comparable data, but also achieve competitive or superior results under fully supervised settings. Our findings reveal that the semantic priors learned by large-scale VLMs effectively transfer to unseen spectral modalities, offering a powerful pathway toward data-efficient multispectral perception.
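To make the described pipeline concrete, below is a minimal PyTorch sketch of the general idea: fuse RGB and thermal backbone features into a single visual representation, then classify region features against text embeddings of class names, as open-vocabulary detectors like Grounding DINO and YOLO-World do. The module and function names (RGBThermalFusion, text_region_logits) and the gated-fusion design are illustrative assumptions, not the authors' actual integration mechanism.

```python
# Hypothetical sketch of RGB-thermal fusion feeding a text-conditioned
# detector head. Names and architecture are assumptions for illustration,
# not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RGBThermalFusion(nn.Module):
    """Fuse per-location RGB and thermal features into one visual feature map."""

    def __init__(self, dim: int = 256):
        super().__init__()
        # Learned gate deciding, per location, how much to trust each modality.
        self.gate = nn.Sequential(nn.Conv2d(2 * dim, dim, kernel_size=1), nn.Sigmoid())
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, rgb_feat: torch.Tensor, thermal_feat: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([rgb_feat, thermal_feat], dim=1))
        return self.proj(g * rgb_feat + (1.0 - g) * thermal_feat)


def text_region_logits(region_feats: torch.Tensor, text_embeds: torch.Tensor) -> torch.Tensor:
    """Open-vocabulary classification: cosine similarity between pooled
    region features (N, D) and class-name text embeddings (C, D)."""
    r = F.normalize(region_feats, dim=-1)
    t = F.normalize(text_embeds, dim=-1)
    return r @ t.T  # (N, C) logits over text-defined classes


if __name__ == "__main__":
    fusion = RGBThermalFusion(dim=256)
    rgb = torch.randn(1, 256, 32, 32)      # backbone features of the RGB image
    thermal = torch.randn(1, 256, 32, 32)  # backbone features of the thermal image
    fused = fusion(rgb, thermal)           # (1, 256, 32, 32)

    regions = torch.randn(10, 256)         # pooled features of 10 region proposals
    texts = torch.randn(3, 256)            # embeddings of 3 class-name prompts
    print(text_region_logits(regions, texts).shape)  # torch.Size([10, 3])
```

Because classification is performed against text embeddings rather than a fixed label layer, only a handful of annotated multispectral examples are needed to adapt the fusion weights, which is what makes the few-shot regime tractable.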