REASONEDIT: Towards Reasoning-Enhanced Image Editing Models
By: Fukun Yin, Shiyu Liu, Yucheng Han, and more
Potential Business Impact:
Makes AI better at changing pictures with words.
Image editing models have recently made remarkable progress. A common architectural design couples a multimodal large language model (MLLM) encoder with a diffusion decoder, as in systems such as Step1X-Edit and Qwen-Image-Edit, where the MLLM encodes both the reference image and the instruction but remains frozen during training. In this work, we demonstrate that unlocking the reasoning capabilities of the MLLM can push the boundaries of editing models further. Specifically, we explore two reasoning mechanisms, thinking and reflection, which enhance instruction understanding and editing accuracy. Building on these, our proposed framework performs image editing in a thinking-editing-reflection loop: the thinking mechanism leverages the MLLM's world knowledge to interpret abstract instructions, while the reflection mechanism reviews editing results, automatically corrects unintended manipulations, and determines the stopping round. Extensive experiments demonstrate that our reasoning approach achieves significant performance gains, with improvements on ImgEdit (+4.3%), GEdit (+4.7%), and Kris (+8.2%) when our DiT is initialized from Step1X-Edit (ReasonEdit-S), and it also outperforms previous open-source methods on both GEdit and Kris when integrated with Qwen-Image-Edit (ReasonEdit-Q).
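The thinking-editing-reflection loop described above can be sketched in code. The following is a minimal Python sketch under stated assumptions, not the paper's implementation: the helper names (`mllm_think`, `dit_edit`, `mllm_reflect`), the `Reflection` structure, and the round budget `MAX_ROUNDS` are all hypothetical placeholders for the MLLM and diffusion components.

```python
# Minimal sketch of the thinking-editing-reflection loop from the abstract.
# All helper names and the Reflection structure are hypothetical placeholders,
# not the paper's actual API.

from dataclasses import dataclass

MAX_ROUNDS = 3  # assumed cap on editing rounds; the paper's budget may differ


@dataclass
class Reflection:
    """Hypothetical result of the MLLM reviewing an edited image."""
    satisfied: bool   # True when the edit matches the instruction
    correction: str   # follow-up instruction fixing unintended changes


def mllm_think(image, instruction: str) -> str:
    """Placeholder: the MLLM uses world knowledge to turn an abstract
    instruction (e.g. 'make it look festive') into a concrete edit plan."""
    return f"concrete plan derived from: {instruction}"


def dit_edit(image, plan: str):
    """Placeholder: the diffusion (DiT) decoder applies the planned edit."""
    return image  # stand-in; a real decoder returns the edited image


def mllm_reflect(source, edited, instruction: str) -> Reflection:
    """Placeholder: the MLLM compares source and edited images against the
    instruction, flagging unintended manipulations."""
    return Reflection(satisfied=True, correction="")


def reason_edit(image, instruction: str):
    """Run thinking -> editing -> reflection until the reflection step is
    satisfied or the round budget is exhausted (the 'stopping round')."""
    current = image
    plan = mllm_think(current, instruction)
    for _ in range(MAX_ROUNDS):
        edited = dit_edit(current, plan)
        review = mllm_reflect(image, edited, instruction)
        if review.satisfied:
            return edited  # reflection identifies the stopping round
        # Otherwise, reflection proposes a correction and the loop re-thinks.
        plan = mllm_think(edited, review.correction)
        current = edited
    return current


if __name__ == "__main__":
    result = reason_edit(image="input.png", instruction="make the scene autumnal")
    print(result)
```

The key design point the loop illustrates is that the reflection step serves two roles: it feeds corrective instructions back into the next thinking round, and its satisfaction signal is what decides when editing stops rather than a fixed number of passes.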
Similar Papers
Understanding the Implicit User Intention via Reasoning with Large Language Model for Image Editing
CV and Pattern Recognition
Makes editing pictures easier with smart instructions.
MIRA: Multimodal Iterative Reasoning Agent for Image Editing
CV and Pattern Recognition
Makes computer art follow your exact words.
Step1X-Edit: A Practical Framework for General Image Editing
CV and Pattern Recognition
Makes computer pictures change like magic.