ARGenSeg: Image Segmentation with Autoregressive Image Generation Model
By: Xiaolong Wang, Lixiang Ru, Ziyuan Huang, and more
Potential Business Impact:
Lets computers both understand images and precisely outline the objects in them.
We propose a novel AutoRegressive Generation-based paradigm for image Segmentation (ARGenSeg), achieving multimodal understanding and pixel-level perception within a unified framework. Prior works integrating image segmentation into multimodal large language models (MLLMs) typically employ either boundary-point representations or dedicated segmentation heads. These methods rely on discrete representations or semantic prompts fed into task-specific decoders, which limits the ability of the MLLM to capture fine-grained visual details. To address these challenges, we introduce a segmentation framework for MLLMs based on image generation, which naturally produces dense masks for target objects. We leverage the MLLM to output visual tokens and detokenize them into images using a universal VQ-VAE, making the segmentation fully dependent on the pixel-level understanding of the MLLM. To reduce inference latency, we employ a next-scale prediction strategy that generates the required visual tokens in parallel. Extensive experiments demonstrate that our method surpasses prior state-of-the-art approaches on multiple segmentation datasets with a remarkable boost in inference speed, while maintaining strong understanding capabilities.
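The two mechanisms the abstract describes, next-scale prediction (emitting all visual tokens of a given resolution in one parallel step) and VQ-VAE detokenization into a dense mask, can be illustrated with a toy sketch. This is not the ARGenSeg implementation: the function names, the random token sampler standing in for the MLLM, and the averaging "decoder" standing in for the VQ-VAE are all hypothetical; only the step-count structure (one autoregressive step per scale rather than one per token) reflects the paper's idea.

```python
import random

def next_scale_prediction(scales=(1, 2, 4, 8), codebook_size=256, seed=0):
    """Toy next-scale prediction: all tokens at a given scale are emitted in
    one parallel step, so the number of autoregressive steps equals
    len(scales) (4 here) rather than the total token count (1+4+16+64 = 85)."""
    rng = random.Random(seed)
    token_maps = []
    for s in scales:
        # In ARGenSeg an MLLM would predict this s x s grid of visual tokens
        # conditioned on all coarser scales; here we sample codebook ids.
        token_maps.append([[rng.randrange(codebook_size) for _ in range(s)]
                           for _ in range(s)])
    return token_maps

def detokenize_to_mask(token_maps, out_size=8, threshold=128):
    """Stand-in for the VQ-VAE decoder: nearest-neighbour upsample each
    scale's token map to the output resolution, average across scales,
    then threshold the result into a binary segmentation mask."""
    acc = [[0.0] * out_size for _ in range(out_size)]
    for grid in token_maps:
        step = out_size // len(grid)
        for y in range(out_size):
            for x in range(out_size):
                acc[y][x] += grid[y // step][x // step]
    return [[1 if v / len(token_maps) > threshold else 0 for v in row]
            for row in acc]

maps = next_scale_prediction()
mask = detokenize_to_mask(maps)
```

The point of the sketch is the latency argument: a flat raster-order autoregressive generator would need 85 sequential steps for the same 8x8 final grid, while scale-parallel decoding needs only 4.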
Similar Papers
Seg-VAR: Image Segmentation with Visual Autoregressive Modeling
CV and Pattern Recognition
Makes computers precisely outline any object in pictures.
Understand Before You Generate: Self-Guided Training for Autoregressive Image Generation
CV and Pattern Recognition
Makes AI better at understanding and creating pictures.
Personalized Text-to-Image Generation with Auto-Regressive Models
CV and Pattern Recognition
Makes AI draw pictures of *your* stuff.