Multimodal Referring Segmentation: A Survey
By: Henghui Ding, Song Tang, Shuting He, and more
Potential Business Impact:
Helps computers find and outline objects in images, videos, and 3D scenes from written or spoken descriptions.
Multimodal referring segmentation aims to segment target objects in visual scenes, such as images, videos, and 3D scenes, based on referring expressions in text or audio format. This task plays a crucial role in practical applications that require accurate object perception based on user instructions. Over the past decade, it has gained significant attention in the multimodal community, driven by advances in convolutional neural networks, transformers, and large language models, all of which have substantially improved multimodal perception capabilities. This paper provides a comprehensive survey of multimodal referring segmentation. We begin by introducing the background of this field, including problem definitions and commonly used datasets. Next, we summarize a unified meta architecture for referring segmentation and review representative methods across the three primary visual scenes: images, videos, and 3D scenes. We further discuss Generalized Referring Expression (GREx) methods that address the challenges of real-world complexity, along with related tasks and practical applications. Extensive performance comparisons on standard benchmarks are also provided. We continually track related works at https://github.com/henghuiding/Awesome-Multimodal-Referring-Segmentation.
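The unified meta architecture mentioned in the abstract typically pairs a visual encoder with a language (or audio) encoder, fuses the two modalities, and decodes a per-pixel mask for the referred object. Below is a minimal sketch of that encoder-fusion-decoder pattern in PyTorch, given only as an illustration of the task's input and output; the ReferringSegmentationSketch class, its module choices, and all dimensions are hypothetical and are not the survey's actual formulation.

    # Minimal sketch of an encoder-fusion-decoder referring segmentation model.
    # All names and hyperparameters here are illustrative assumptions.
    import torch
    import torch.nn as nn

    class ReferringSegmentationSketch(nn.Module):
        def __init__(self, dim=256, vocab_size=1000):
            super().__init__()
            # Visual encoder: turns an image into a grid of feature vectors.
            self.visual_encoder = nn.Conv2d(3, dim, kernel_size=16, stride=16)
            # Language encoder: turns the referring expression into token features.
            self.text_embed = nn.Embedding(vocab_size, dim)
            self.text_encoder = nn.GRU(dim, dim, batch_first=True)
            # Cross-modal fusion: visual features attend to the expression.
            self.fusion = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
            # Mask decoder: predicts a foreground score per spatial location.
            self.mask_head = nn.Conv2d(dim, 1, kernel_size=1)

        def forward(self, image, tokens):
            # image: (B, 3, H, W); tokens: (B, L) integer ids of the expression.
            vis = self.visual_encoder(image)                      # (B, C, h, w)
            B, C, h, w = vis.shape
            vis_seq = vis.flatten(2).transpose(1, 2)              # (B, h*w, C)
            txt, _ = self.text_encoder(self.text_embed(tokens))   # (B, L, C)
            fused, _ = self.fusion(vis_seq, txt, txt)             # (B, h*w, C)
            fused = fused.transpose(1, 2).reshape(B, C, h, w)
            return self.mask_head(fused)                          # (B, 1, h, w) logits

    # Usage: a 224x224 image and a 6-token expression yield a 14x14 mask logit map.
    model = ReferringSegmentationSketch()
    logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, 1000, (2, 6)))
    print(logits.shape)  # torch.Size([2, 1, 14, 14])

Real systems surveyed in the paper replace each block with far stronger components (e.g., transformer backbones and large language models), but the overall flow of visual encoding, expression encoding, cross-modal fusion, and mask decoding is the common thread.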
Similar Papers
Towards Agentic AI for Multimodal-Guided Video Object Segmentation
CV and Pattern Recognition
Helps computers find objects in videos using words.
Latent Expression Generation for Referring Image Segmentation and Grounding
CV and Pattern Recognition
Finds the right object even with tricky descriptions.