Refer to Anything with Vision-Language Prompts
By: Shengcao Cao, Zijun Wei, Jason Kuen, and more
Potential Business Impact:
Lets computers find and outline any object in a picture from text descriptions or visual examples.
Recent image segmentation models have advanced to segment images into high-quality masks for visual entities, and yet they cannot provide comprehensive semantic understanding for complex queries based on both language and vision. This limitation reduces their effectiveness in applications that require user-friendly interactions driven by vision-language prompts. To bridge this gap, we introduce a novel task of omnimodal referring expression segmentation (ORES). In this task, a model produces a group of masks based on arbitrary prompts specified by text only or text plus reference visual entities. To address this new challenge, we propose a novel framework to "Refer to Any Segmentation Mask Group" (RAS), which augments segmentation models with complex multimodal interactions and comprehension via a mask-centric large multimodal model. For training and benchmarking ORES models, we create datasets MaskGroups-2M and MaskGroups-HQ to include diverse mask groups specified by text and reference entities. Through extensive evaluation, we demonstrate superior performance of RAS on our new ORES task, as well as classic referring expression segmentation (RES) and generalized referring expression segmentation (GRES) tasks. Project page: https://Ref2Any.github.io.
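To make the ORES task concrete: the model receives an image plus a prompt that is either text alone or text combined with reference visual entities (masks), and it returns a group of masks. Below is a minimal sketch of such an interface in Python. All names here (ORESPrompt, ORESModel, segmenter, mask_lmm) are hypothetical illustrations; the abstract does not specify RAS's actual API, only that a segmentation model is paired with a mask-centric large multimodal model.

```python
# Hypothetical sketch of an ORES-style interface; not the authors' implementation.
from dataclasses import dataclass, field
from typing import Callable, List
import numpy as np


@dataclass
class ORESPrompt:
    """A vision-language prompt: free-form text plus optional reference entities."""
    text: str
    # Binary masks of reference visual entities the query refers to (may be empty).
    reference_masks: List[np.ndarray] = field(default_factory=list)


class ORESModel:
    """Illustrative wrapper: a segmenter proposes entity masks, and a
    mask-centric multimodal model selects the subset matching the prompt."""

    def __init__(self,
                 segmenter: Callable[[np.ndarray], List[np.ndarray]],
                 mask_lmm: Callable[..., List[int]]):
        self.segmenter = segmenter   # e.g., a class-agnostic mask-proposal model
        self.mask_lmm = mask_lmm     # reasons over candidate masks + text + references

    def segment(self, image: np.ndarray, prompt: ORESPrompt) -> List[np.ndarray]:
        # 1) Propose high-quality masks for all visual entities in the image.
        candidates = self.segmenter(image)
        # 2) Let the multimodal model pick which candidates form the referred group.
        keep = self.mask_lmm(image, candidates, prompt.text, prompt.reference_masks)
        return [candidates[i] for i in keep]
```

A text-only query would use ORESPrompt("all chairs near the window"), while a text-plus-reference query would also pass one or more example masks; in both cases the output is a group of masks rather than a single one, which is what distinguishes ORES from classic RES.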
Similar Papers
Towards Unified Referring Expression Segmentation Across Omni-Level Visual Target Granularities
CV and Pattern Recognition
Helps computers find specific parts of pictures.
RESAnything: Attribute Prompting for Arbitrary Referring Segmentation
CV and Pattern Recognition
Lets computers find any object or part in pictures.
R2SM: Referring and Reasoning for Selective Masks
CV and Pattern Recognition
Lets computers show hidden parts of pictures.