SAM 3: Segment Anything with Concepts
By: Nicolas Carion, Laura Gustafson, Yuan-Ting Hu, and more
Potential Business Impact:
Finds and tracks any object you describe.
We present Segment Anything Model (SAM) 3, a unified model that detects, segments, and tracks objects in images and videos based on concept prompts, which we define as short noun phrases (e.g., "yellow school bus"), image exemplars, or a combination of both. Promptable Concept Segmentation (PCS) takes such prompts and returns segmentation masks and unique identities for all matching object instances. To advance PCS, we build a scalable data engine that produces a high-quality dataset with 4M unique concept labels, including hard negatives, across images and videos. Our model consists of an image-level detector and a memory-based video tracker that share a single backbone. Recognition and localization are decoupled with a presence head, which boosts detection accuracy. SAM 3 doubles the accuracy of existing systems in both image and video PCS, and improves on previous SAM capabilities on visual segmentation tasks. We open-source SAM 3 along with our new Segment Anything with Concepts (SA-Co) benchmark for promptable concept segmentation.
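To make the PCS task definition concrete, the sketch below illustrates the input/output contract described in the abstract: a concept prompt (a short noun phrase, image exemplars, or both) goes in, and per-instance masks with unique identities come out. This is a minimal, hypothetical illustration under assumed names; ConceptPrompt, InstancePrediction, and segment_concepts are not the released SAM 3 API.

from dataclasses import dataclass, field
from typing import Optional
import numpy as np


@dataclass
class ConceptPrompt:
    """A PCS prompt: a short noun phrase, image exemplars, or both."""
    noun_phrase: Optional[str] = None                    # e.g. "yellow school bus"
    exemplar_boxes: list = field(default_factory=list)   # [(x0, y0, x1, y1), ...] boxes around example objects


@dataclass
class InstancePrediction:
    """One detected object instance matching the concept."""
    instance_id: int    # unique identity, intended to stay stable across video frames
    mask: np.ndarray    # boolean segmentation mask, shape (H, W)
    score: float        # detection confidence


def segment_concepts(image: np.ndarray, prompt: ConceptPrompt) -> list[InstancePrediction]:
    """Placeholder for a PCS call: return masks and identities for every
    instance in `image` that matches `prompt`. A real system would run the
    shared-backbone detector (and, for video, the memory-based tracker) here;
    this stub only shows the contract, not an actual implementation."""
    raise NotImplementedError("Replace with an actual SAM 3 inference call.")


if __name__ == "__main__":
    prompt = ConceptPrompt(noun_phrase="yellow school bus")
    image = np.zeros((480, 640, 3), dtype=np.uint8)  # dummy frame
    try:
        instances = segment_concepts(image, prompt)
    except NotImplementedError:
        instances = []
    print(f"{len(instances)} matching instances")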
Similar Papers
MedSAM3: Delving into Segment Anything with Medical Concepts
CV and Pattern Recognition
Lets doctors find anatomical structures in medical scans by describing them in words.
SAM3-I: Segment Anything with Instructions
CV and Pattern Recognition
Lets computers segment objects by following complex instructions.
Evaluating SAM2 for Video Semantic Segmentation
CV and Pattern Recognition
Assesses how well computers can cut out objects in videos.