Towards Generalized Multi-Image Editing for Unified Multimodal Models

Published: January 9, 2026 | arXiv ID: 2601.05572v1

By: Pengcheng Xu, Peng Tang, Donghao Luo, and more

Potential Business Impact:

Enables consistent, coordinated editing across multiple input images at once.

Business Areas:
Image Recognition Data and Analytics, Software

Unified Multimodal Models (UMMs) integrate multimodal understanding and generation, yet they struggle to maintain visual consistency and to disambiguate visual cues when referencing details across multiple input images. In this work, we propose a scalable multi-image editing framework for UMMs that explicitly distinguishes image identities and generalizes to a variable number of inputs. Algorithmically, we introduce two innovations: 1) learnable latent separators, which explicitly differentiate each reference image in the latent space, enabling accurate and disentangled conditioning; and 2) sinusoidal index encoding, which assigns all visual tokens from the same image a continuous sinusoidal index embedding, providing explicit image identity while allowing generalization and extrapolation to a variable number of inputs. To facilitate training and evaluation, we establish a high-fidelity benchmark using an inverse dataset-construction methodology that guarantees artifact-free, achievable target outputs. Experiments show clear improvements in semantic consistency, visual fidelity, and cross-image integration over prior baselines on diverse multi-image editing tasks, validating our advantages in consistency and generalization.
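The abstract's sinusoidal index encoding can be read as a transformer-style sinusoidal embedding indexed by image identity rather than token position, added uniformly to every visual token from the same image. The sketch below illustrates that reading; the function names, dimensions, and the exact frequency schedule are assumptions, not the paper's formulation.

```python
import numpy as np

def sinusoidal_index_encoding(image_index: int, dim: int) -> np.ndarray:
    """Transformer-style sinusoidal embedding, keyed by image index.

    Because the index maps to continuous sin/cos values, an embedding
    exists for any image count, which is one plausible mechanism for
    the extrapolation to unseen numbers of inputs the abstract claims.
    """
    half = np.arange(dim // 2)
    freqs = 1.0 / (10000.0 ** (2 * half / dim))  # standard frequency schedule
    angles = image_index * freqs
    emb = np.empty(dim)
    emb[0::2] = np.sin(angles)
    emb[1::2] = np.cos(angles)
    return emb

def tag_tokens(token_sets: list[np.ndarray], dim: int) -> list[np.ndarray]:
    """Add the same index embedding to every visual token of image i,
    giving tokens an explicit image identity before conditioning."""
    return [tokens + sinusoidal_index_encoding(i, dim)
            for i, tokens in enumerate(token_sets)]
```

Under this reading, a model trained on up to three reference images can still form an embedding for a fifth image at inference time, since the encoding is a continuous function of the index rather than a learned lookup table.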

Page Count
21 pages

Category
Computer Science:
CV and Pattern Recognition