Towards Generalized Multi-Image Editing for Unified Multimodal Models
By: Pengcheng Xu, Peng Tang, Donghao Luo and more
Potential Business Impact:
Edits multiple pictures together while keeping them consistent.
Unified Multimodal Models (UMMs) integrate multimodal understanding and generation, yet they struggle to maintain visual consistency and disambiguate visual cues when referencing details across multiple input images. In this work, we propose a scalable multi-image editing framework for UMMs that explicitly distinguishes image identities and generalizes to a variable number of inputs. Algorithmically, we introduce two innovations: 1) Learnable latent separators explicitly differentiate each reference image in the latent space, enabling accurate and disentangled conditioning. 2) Sinusoidal index encoding assigns the visual tokens from the same image a shared continuous sinusoidal index embedding, which provides explicit image identity while allowing generalization and extrapolation to a variable number of inputs. To facilitate training and evaluation, we establish a high-fidelity benchmark built with an inverse dataset-construction methodology that guarantees artifact-free, achievable outputs. Experiments show clear improvements in semantic consistency, visual fidelity, and cross-image integration over prior baselines on diverse multi-image editing tasks, validating our advantages in consistency and generalization.
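The abstract describes two conditioning mechanisms: learnable separators between the latent token sequences of different reference images, and a sinusoidal index embedding shared by all tokens of the same image. The sketch below illustrates one plausible way to combine them; the class name, tensor shapes, and PyTorch framing are assumptions for illustration, not the authors' implementation.

```python
import math
import torch
import torch.nn as nn


class MultiImageConditioner(nn.Module):
    """Hypothetical sketch: interleave a learnable separator token between
    per-image latent sequences and add a sinusoidal index embedding shared
    by all tokens of the same image (assumed design, not the paper's code)."""

    def __init__(self, dim: int):
        super().__init__()
        assert dim % 2 == 0, "even dim keeps the sin/cos halves balanced"
        self.dim = dim
        # One learnable separator token, reused between every pair of images.
        self.separator = nn.Parameter(torch.randn(1, dim))

    def index_embedding(self, image_idx: int, num_tokens: int) -> torch.Tensor:
        # Standard sinusoidal encoding evaluated at the (continuous) image
        # index and broadcast to every token of that image; continuity is
        # what permits extrapolation to more inputs than seen in training.
        half = self.dim // 2
        freqs = torch.exp(
            -math.log(10000.0) * torch.arange(half, dtype=torch.float32) / half
        )
        angles = image_idx * freqs
        emb = torch.cat([torch.sin(angles), torch.cos(angles)])
        return emb.unsqueeze(0).expand(num_tokens, -1)

    def forward(self, image_latents: list[torch.Tensor]) -> torch.Tensor:
        """image_latents: list of (num_tokens_i, dim) latent sequences, one
        per reference image; returns a single conditioning sequence."""
        pieces = []
        for i, tokens in enumerate(image_latents):
            tokens = tokens + self.index_embedding(i, tokens.shape[0])
            pieces.append(tokens)
            if i < len(image_latents) - 1:
                pieces.append(self.separator)  # explicit image boundary
        return torch.cat(pieces, dim=0)
```

Under these assumptions, the separator gives the model an explicit boundary signal while the index embedding ties every token back to its source image, so the same module handles two, three, or more reference images without retraining on each count.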
Similar Papers
UniModel: A Visual-Only Framework for Unified Multimodal Understanding and Generation
CV and Pattern Recognition
Makes computers see and create pictures from words.
UniVideo: Unified Understanding, Generation, and Editing for Videos
CV and Pattern Recognition
Makes videos from words, pictures, and edits them.
Uni-MMMU: A Massive Multi-discipline Multimodal Unified Benchmark
CV and Pattern Recognition
Tests how well AI can see and create.