Unified Open-World Segmentation with Multi-Modal Prompts

Published: October 12, 2025 | arXiv ID: 2510.10524v1

By: Yang Liu, Yufei Yin, Chenchen Jing, and more

Potential Business Impact:

Lets computers segment anything you describe, whether by text or by example image.

Business Areas:
Semantic Search Internet Services

In this work, we present COSINE, a unified open-world segmentation model that consolidates open-vocabulary segmentation and in-context segmentation with multi-modal prompts (e.g., text and image). COSINE exploits foundation models to extract representations for an input image and its corresponding multi-modal prompts, and a SegDecoder to align these representations, model their interaction, and produce masks specified by the input prompts across different granularities. In this way, COSINE overcomes the architectural discrepancies, divergent learning objectives, and distinct representation learning strategies of previous pipelines for open-vocabulary segmentation and in-context segmentation. Comprehensive experiments demonstrate that COSINE achieves significant performance improvements on both open-vocabulary and in-context segmentation tasks. Our exploratory analyses highlight that the synergistic collaboration between visual and textual prompts leads to significantly improved generalization over single-modality approaches.
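To make the unified-decoder idea concrete, here is a minimal toy sketch (not the authors' code; every function name, shape, and value here is an illustrative assumption). It shows the key property the abstract describes: once a text prompt and a visual (in-context) prompt are embedded into the same space as the pixel features, a single decoder can serve both open-vocabulary and in-context segmentation.

```python
import numpy as np

def encode_image(image, dim=8):
    """Stand-in for a frozen foundation-model encoder: per-pixel features.
    (Hypothetical; a real system would use a pretrained vision backbone.)"""
    rng = np.random.default_rng(0)
    proj = rng.standard_normal((1, dim))
    return image[..., None] * proj  # shape (H, W, dim)

def seg_decoder(pixel_feats, prompt_embedding, threshold=0.5):
    """Toy 'SegDecoder': cosine similarity between each pixel feature and the
    prompt embedding, thresholded into a binary mask."""
    feats = pixel_feats / (np.linalg.norm(pixel_feats, axis=-1, keepdims=True) + 1e-8)
    prompt = prompt_embedding / (np.linalg.norm(prompt_embedding) + 1e-8)
    sim = feats @ prompt  # (H, W) similarity map
    return (sim > threshold).astype(np.uint8)

# Toy 3x3 "image": nonzero pixels belong to the target object.
image = np.array([[0.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0],
                  [0.0, 0.0, 0.0]])
pixel_feats = encode_image(image)

# Pretend this embedding came from a text encoder ("the object") or from an
# in-context example mask; either way it lands in the shared feature space.
prompt = pixel_feats[0, 2]
mask = seg_decoder(pixel_feats, prompt)
print(mask)  # 1 where the object is, 0 elsewhere
```

The point of the sketch is the single `seg_decoder`: it never knows whether the prompt embedding originated from text or from an example image, which is the consolidation the paper argues removes the architectural split between the two task families.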

Repos / Data Links

Page Count
16 pages

Category
Computer Science:
CV and Pattern Recognition