Latent Expression Generation for Referring Image Segmentation and Grounding

Published: August 7, 2025 | arXiv ID: 2508.05123v2

By: Seonghoon Yu, Junbeom Hong, Joonseok Lee, and more

Potential Business Impact:

Locates the described object in an image even when the wording is ambiguous or several similar objects are present.

Visual grounding tasks, such as referring image segmentation (RIS) and referring expression comprehension (REC), aim to localize a target object based on a given textual description. The target object in an image can be described in multiple ways, reflecting diverse attributes such as color and position. However, most existing methods rely on a single textual input, which captures only a fraction of the rich information available in the visual domain. This mismatch between rich visual details and sparse textual cues can lead to the misidentification of similar objects. To address this, we propose a novel visual grounding framework that leverages multiple latent expressions generated from a single textual input by incorporating complementary visual details absent from the original description. Specifically, we introduce subject distributor and visual concept injector modules to embed both shared-subject and distinct-attribute concepts into the latent representations, thereby capturing unique, target-specific visual cues. We also propose a positive-margin contrastive learning strategy to align all latent expressions with the original text while preserving subtle variations. Experimental results show that our method not only outperforms state-of-the-art RIS and REC approaches on multiple benchmarks but also achieves outstanding performance on the generalized referring expression segmentation (GRES) benchmark.
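The abstract names three components: latent expressions generated from one textual input, subject distributor and visual concept injector modules, and a positive-margin contrastive loss. As a rough illustration only, below is a minimal PyTorch sketch of how such a pipeline could look. The class `LatentExpressionGenerator`, the learned attribute slots, the attention-based injector, and the clamp-based margin formulation are all assumptions made for this sketch; the paper's actual architectures and loss may differ substantially.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentExpressionGenerator(nn.Module):
    """Hypothetical sketch: expand one pooled text embedding into K latent
    expressions that share a subject concept but pick up distinct visual
    attribute cues from the image."""
    def __init__(self, dim: int, num_latents: int, num_heads: int = 8):
        super().__init__()
        # stand-in for the "subject distributor": broadcasts a shared
        # subject concept to every latent expression
        self.subject_proj = nn.Linear(dim, dim)
        # stand-in for the "visual concept injector": one learned slot per
        # latent steers attention toward a distinct visual attribute
        self.attr_slots = nn.Parameter(0.02 * torch.randn(num_latents, dim))
        self.injector = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, text_emb: torch.Tensor, visual_feats: torch.Tensor):
        # text_emb: (B, D) pooled expression embedding
        # visual_feats: (B, N, D) image patch/region features
        subject = self.subject_proj(text_emb)             # (B, D)
        queries = subject.unsqueeze(1) + self.attr_slots  # (B, K, D)
        # each latent query attends to the image, absorbing a visual cue
        # that the original description did not spell out
        latents, _ = self.injector(queries, visual_feats, visual_feats)
        return latents                                    # (B, K, D)

def positive_margin_contrastive(latents, text_emb, margin=0.1, tau=0.07):
    """Hypothetical positive-margin InfoNCE: pull each latent toward its
    own text embedding, but cap the positive similarity at (1 - margin)
    so the K latents keep subtle variations instead of collapsing."""
    z = F.normalize(latents, dim=-1)                  # (B, K, D)
    t = F.normalize(text_emb, dim=-1)                 # (B, D)
    sim = torch.einsum('bkd,cd->bkc', z, t)           # (B, K, B) cosine sims
    B, K, _ = sim.shape
    own = torch.eye(B, dtype=torch.bool, device=sim.device)
    own = own.unsqueeze(1).expand(B, K, B)            # marks own-text entries
    # clamp positives: no further pull once a latent is within the margin
    sim = torch.where(own, sim.clamp(max=1.0 - margin), sim)
    labels = torch.arange(B, device=sim.device).repeat_interleave(K)
    return F.cross_entropy(sim.reshape(B * K, B) / tau, labels)
```

As a toy call, with `text_emb` of shape (2, 256) and `visual_feats` of shape (2, 49, 256), the generator returns K latents per expression, and the loss treats the other texts in the batch as negatives. The clamp is one simple way to realize a "positive margin" (gradient pressure stops once a latent is close enough to its text); the paper's exact formulation is not reproduced here.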

Country of Origin
🇰🇷 Korea, Republic of

Page Count
17 pages

Category
Computer Science:
Computer Vision and Pattern Recognition