MIRA: Multimodal Iterative Reasoning Agent for Image Editing
By: Ziyun Zeng, Hang Hua, Jiebo Luo
Potential Business Impact:
Makes computer art follow your exact words.
Instruction-guided image editing offers an intuitive way for users to edit images with natural language. However, diffusion-based editing models often struggle to accurately interpret complex user instructions, especially those involving compositional relationships, contextual cues, or referring expressions, leading to edits that drift semantically or fail to reflect the intended changes. We tackle this problem by proposing MIRA (Multimodal Iterative Reasoning Agent), a lightweight, plug-and-play multimodal reasoning agent that performs editing through an iterative perception-reasoning-action loop, effectively simulating a multi-turn human-model interaction. Instead of issuing a single prompt or a static plan, MIRA predicts atomic edit instructions step by step, using visual feedback to guide its decisions. Our 150K-example multimodal tool-use dataset, MIRA-Editing, combined with a two-stage training pipeline of supervised fine-tuning (SFT) followed by GRPO, enables MIRA to reason over and carry out complex editing instructions. When paired with open-source image editing models such as Flux.1-Kontext, Step1X-Edit, and Qwen-Image-Edit, MIRA significantly improves both semantic consistency and perceptual quality, achieving performance comparable to or exceeding proprietary systems such as GPT-Image and Nano-Banana.
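To make the loop concrete, the sketch below shows how such an iterative perception-reasoning-action agent could drive an off-the-shelf editor. It is a minimal illustration based only on the abstract: the function names (propose_atomic_edit, edit_image, edit_is_done), the step budget, and the stopping check are all hypothetical placeholders, not MIRA's actual interface.

    # Minimal sketch of a perception-reasoning-action editing loop as described
    # in the abstract. Every name here is a hypothetical placeholder, not the
    # authors' actual API.
    from typing import Any

    Image = Any    # stand-in for an image tensor or PIL image
    MAX_STEPS = 5  # assumed iteration budget; the abstract does not specify one

    def propose_atomic_edit(image: Image, goal: str, history: list[str]) -> str:
        """Reasoning step: a multimodal LLM predicts the next atomic edit
        instruction from the current image, the user's goal, and prior steps."""
        raise NotImplementedError  # hypothetical: backed by the trained agent

    def edit_image(image: Image, atomic_instruction: str) -> Image:
        """Action step: call a backbone editor (e.g. Flux.1-Kontext, Step1X-Edit,
        or Qwen-Image-Edit) with a single atomic instruction."""
        raise NotImplementedError  # hypothetical wrapper around the editing model

    def edit_is_done(image: Image, goal: str) -> bool:
        """Perception step: inspect the edited image and judge whether the
        user's instruction has been fully realized."""
        raise NotImplementedError  # hypothetical visual-feedback check

    def iterative_edit(image: Image, goal: str) -> Image:
        """Iterate perceive -> reason -> act until the goal is met or the
        step budget runs out."""
        history: list[str] = []
        for _ in range(MAX_STEPS):
            step = propose_atomic_edit(image, goal, history)  # reason
            image = edit_image(image, step)                   # act
            history.append(step)
            if edit_is_done(image, goal):                     # perceive
                return image
        return image

The key design point the abstract emphasizes is that each atomic instruction is chosen after seeing the result of the previous edit, rather than committing to a single prompt or static plan up front.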
Similar Papers
When Visualizing is the First Step to Reasoning: MIRA, a Benchmark for Visual Chain-of-Thought
CV and Pattern Recognition
Helps computers "draw to think" for harder problems.
REASONEDIT: Towards Reasoning-Enhanced Image Editing Models
CV and Pattern Recognition
Makes AI better at changing pictures with words.
MIRA: Empowering One-Touch AI Services on Smartphones with MLLM-based Instruction Recommendation
Artificial Intelligence
Lets your phone suggest what AI to use.