Segment Anything, Even Occluded
By: Wei-En Tai, Yu-Lin Shih, Cheng Sun, and more
Potential Business Impact:
Helps robots see hidden parts of objects.
Amodal instance segmentation, which aims to detect and segment both the visible and occluded parts of objects in images, plays a crucial role in applications such as autonomous driving, robotic manipulation, and scene understanding. Existing methods require training front-end detectors and mask decoders jointly, an approach that lacks flexibility and fails to leverage the strengths of pre-existing modal detectors. To address this limitation, we propose SAMEO, a novel framework that adapts the Segment Anything Model (SAM) as a versatile mask decoder capable of interfacing with various front-end detectors to predict masks even for partially occluded objects. Acknowledging the scarcity of amodal segmentation datasets, we introduce Amodal-LVIS, a large-scale synthetic dataset of 300K images derived from the modal LVIS and LVVIS datasets, which significantly expands the training data available for amodal segmentation research. Our experiments demonstrate that our approach, when trained on the extended dataset including Amodal-LVIS, achieves remarkable zero-shot performance on both the COCOA-cls and D2SA benchmarks, highlighting its potential to generalize to unseen scenarios.
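To make the decoupled design described above concrete, here is a minimal conceptual sketch of such a pipeline: a front-end detector supplies box prompts, and a SAM-style decoder turns each prompt into an amodal mask covering visible and occluded regions alike. All class and function names below (TinyImageEncoder, AmodalMaskDecoder, segment_amodal) are invented for illustration and are not SAMEO's actual API; the toy modules merely stand in for SAM's image encoder and mask decoder.

```python
# Conceptual sketch only: illustrates a detector-agnostic amodal mask decoder,
# not the SAMEO implementation. Names and shapes are hypothetical.
import torch
import torch.nn as nn


class TinyImageEncoder(nn.Module):
    """Stand-in for SAM's image encoder (a ViT in the real model)."""

    def __init__(self, dim: int = 64):
        super().__init__()
        self.conv = nn.Conv2d(3, dim, kernel_size=16, stride=16)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.conv(image)  # (1, dim, H/16, W/16)


class AmodalMaskDecoder(nn.Module):
    """Stand-in for a SAM-style mask decoder prompted with detector boxes.

    Given image features and one box prompt per object, it predicts a full
    (amodal) mask covering both visible and occluded regions.
    """

    def __init__(self, dim: int = 64):
        super().__init__()
        self.box_embed = nn.Linear(4, dim)
        self.head = nn.Conv2d(dim, 1, kernel_size=1)

    def forward(self, feats: torch.Tensor, boxes: torch.Tensor) -> torch.Tensor:
        # Fuse each box prompt into the shared feature map via an additive bias.
        bias = self.box_embed(boxes)[:, :, None, None]          # (N, dim, 1, 1)
        fused = feats.expand(boxes.shape[0], -1, -1, -1) + bias  # (N, dim, h, w)
        return self.head(fused)                                  # (N, 1, h, w) amodal logits


def segment_amodal(image, boxes, encoder, decoder):
    """Decoupled pipeline: any front-end detector supplies `boxes`; the
    decoder turns each box into an amodal mask, occluded parts included."""
    feats = encoder(image)            # image features (frozen or fine-tuned)
    logits = decoder(feats, boxes)    # one amodal mask per detected box
    return logits.sigmoid() > 0.5


if __name__ == "__main__":
    encoder, decoder = TinyImageEncoder(), AmodalMaskDecoder()
    image = torch.rand(1, 3, 256, 256)
    # Boxes (x1, y1, x2, y2) would come from any pre-trained modal detector.
    boxes = torch.tensor([[32.0, 48.0, 180.0, 220.0]])
    masks = segment_amodal(image, boxes, encoder, decoder)
    print(masks.shape)  # torch.Size([1, 1, 16, 16]) at feature resolution
```

The point of the sketch is the interface, not the architecture: because the decoder consumes only image features and box prompts, the front-end detector can be swapped without retraining the whole system, which is the flexibility the abstract attributes to SAMEO.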
Similar Papers
MemorySAM: Memorize Modalities and Semantics with Segment Anything Model 2 for Multi-modal Semantic Segmentation
CV and Pattern Recognition
Helps computers see objects in different kinds of pictures.
X-SAM: From Segment Anything to Any Segmentation
CV and Pattern Recognition
Helps computers understand pictures like people do.
Unveiling the Invisible: Reasoning Complex Occlusions Amodally with AURA
CV and Pattern Recognition
Helps computers guess hidden object shapes and answer questions.