PairEdit: Learning Semantic Variations for Exemplar-based Image Editing
By: Haoguang Lu, Jiacheng Chen, Zhenguo Yang, and more
Potential Business Impact:
Teaches computers to edit pictures using examples.
Recent advancements in text-guided image editing have achieved notable success by leveraging natural language prompts for fine-grained semantic control. However, certain editing semantics are difficult to specify precisely with textual descriptions alone. A practical alternative is to learn editing semantics from paired source-target examples. Existing exemplar-based editing methods, however, still rely on text prompts describing the change within paired examples or on learning implicit text-based editing instructions. In this paper, we introduce PairEdit, a novel visual editing method designed to effectively learn complex editing semantics from a limited number of image pairs, or even a single image pair, without any textual guidance. We propose a target noise prediction that explicitly models semantic variations within paired images through a guidance-direction term. Moreover, we introduce a content-preserving noise schedule to facilitate more effective semantic learning. We also propose optimizing distinct LoRAs to disentangle the learning of semantic variations from that of content. Extensive qualitative and quantitative evaluations demonstrate that PairEdit successfully learns intricate semantics while significantly improving content consistency over baseline methods. Code will be available at https://github.com/xudonmao/PairEdit.
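The abstract names three components: a target noise prediction with a guidance-direction term, a content-preserving noise schedule, and distinct LoRAs for semantics versus content. Since the paper's exact formulation is not given here, the following is a minimal PyTorch sketch of one plausible reading: the target noise augments standard Gaussian noise with a direction term (here the latent difference between target and source), and two LoRA adapter groups are fit on separate objectives. All names (`unet`, `semantic_lora`, `content_lora`, `guidance_scale`) and the placeholder schedule values are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of one training step on a single (source, target) pair.
# Not the authors' released code; forms and names are assumptions.
import torch
import torch.nn.functional as F

def pairedit_step(unet, semantic_lora, content_lora,
                  z_src, z_tgt, t, noise, guidance_scale=1.0):
    """z_src, z_tgt: clean latents of source/target images; t: timestep;
    noise: Gaussian noise shared by both branches."""
    # Assumed guidance direction: points from source toward target.
    direction = z_tgt - z_src

    # Generic noising step; the paper's content-preserving schedule is
    # not reproduced here, so alpha/sigma are placeholders.
    alpha, sigma = 0.9, 0.1
    x_tgt = alpha * z_tgt + sigma * noise

    # Target noise prediction: standard noise plus a guidance-direction
    # term -- one plausible reading of the abstract's description.
    eps_target = noise + guidance_scale * direction

    # The semantic LoRA learns the variation on the target branch...
    pred_tgt = unet(x_tgt, t, adapters=[semantic_lora, content_lora])
    loss_sem = F.mse_loss(pred_tgt, eps_target)

    # ...while the content LoRA alone reconstructs the source, keeping
    # the two roles disentangled (again, an assumed split).
    x_src = alpha * z_src + sigma * noise
    pred_src = unet(x_src, t, adapters=[content_lora])
    loss_content = F.mse_loss(pred_src, noise)

    return loss_sem + loss_content

# Toy usage with a dummy predictor standing in for a diffusion U-Net:
dummy_unet = lambda x, t, adapters=None: x
z_src, z_tgt = torch.randn(1, 4, 8, 8), torch.randn(1, 4, 8, 8)
loss = pairedit_step(dummy_unet, None, None, z_src, z_tgt,
                     t=torch.tensor([10]), noise=torch.randn_like(z_src))
```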
Similar Papers
Learning an Image Editing Model without Image Editing Pairs
CV and Pattern Recognition
Teaches computers to edit pictures without examples.
Edit Transfer: Learning Image Editing via Vision In-Context Relations
CV and Pattern Recognition
Changes pictures using just one example.
SuperEdit: Rectifying and Facilitating Supervision for Instruction-Based Image Editing
CV and Pattern Recognition
Teaches computers to edit pictures better with clearer instructions.