Latent Expression Generation for Referring Image Segmentation and Grounding
By: Seonghoon Yu, Junbeom Hong, Joonseok Lee, and more
Potential Business Impact:
Finds the right object even with tricky descriptions.
Visual grounding tasks, such as referring image segmentation (RIS) and referring expression comprehension (REC), aim to localize a target object based on a given textual description. The target object in an image can be described in multiple ways, reflecting diverse attributes such as color, position, and more. However, most existing methods rely on a single textual input, which captures only a fraction of the rich information available in the visual domain. This mismatch between rich visual details and sparse textual cues can lead to the misidentification of similar objects. To address this, we propose a novel visual grounding framework that leverages multiple latent expressions generated from a single textual input by incorporating complementary visual details absent from the original description. Specifically, we introduce subject distributor and visual concept injector modules to embed both shared-subject and distinct-attribute concepts into the latent representations, thereby capturing unique and target-specific visual cues. We also propose a positive-margin contrastive learning strategy to align all latent expressions with the original text while preserving subtle variations. Experimental results show that our method not only outperforms state-of-the-art RIS and REC approaches on multiple benchmarks but also achieves outstanding performance on the generalized referring expression segmentation (GRES) benchmark.
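To make the idea concrete, here is a minimal PyTorch sketch of how multiple latent expressions might be formed and trained. The abstract names the subject distributor, visual concept injector, and positive-margin contrastive loss but does not spell out their equations, so everything below, including the learned attribute queries, the cross-attention wiring, and the margin value, is an illustrative assumption rather than the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentExpressionSketch(nn.Module):
    """Toy sketch: derive K latent expressions from one text embedding by
    injecting visual detail via cross-attention. Module and variable names
    are illustrative, not the paper's actual architecture."""

    def __init__(self, dim=256, num_latents=4, num_heads=8):
        super().__init__()
        # Learned queries standing in for distinct attribute slots (assumption).
        self.attr_queries = nn.Parameter(torch.randn(num_latents, dim))
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, text_embed, visual_feats):
        # text_embed: (B, D) pooled text embedding
        # visual_feats: (B, N, D) image patch features
        B, _ = text_embed.shape
        # Share the subject across all latents by adding the text embedding
        # to every attribute slot (stand-in for the subject distributor).
        queries = self.attr_queries.unsqueeze(0).expand(B, -1, -1) \
                  + text_embed.unsqueeze(1)
        # Inject complementary visual detail into each latent slot
        # (stand-in for the visual concept injector).
        attn_out, _ = self.cross_attn(queries, visual_feats, visual_feats)
        return queries + attn_out  # (B, K, D) latent expressions


def positive_margin_contrastive_loss(latents, text_embed, margin=0.1):
    """Pull every latent toward the original text embedding, but stop once
    cosine similarity exceeds 1 - margin so the latents keep variation."""
    latents = F.normalize(latents, dim=-1)            # (B, K, D)
    text = F.normalize(text_embed, dim=-1)            # (B, D)
    sim = torch.einsum('bkd,bd->bk', latents, text)   # cosine similarities
    return F.relu((1.0 - margin) - sim).mean()
```

The key design point in this reading is the clipped positive term: each latent is pulled toward the original text embedding only until its cosine similarity reaches 1 - margin, so the K latents stay aligned with the sentence while retaining the distinct visual cues attended from the image.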
Similar Papers
Improving Generalized Visual Grounding with Instance-aware Joint Learning
CV and Pattern Recognition
Helps computers find and outline many things in pictures.
Referring Expressions as a Lens into Spatial Language Grounding in Vision-Language Models
Computation and Language
Helps computers understand where things are.