SpotEdit: Evaluating Visually-Guided Image Editing Methods
By: Sara Ghazanfari, Wei-An Lin, Haitong Tian, and more
Potential Business Impact:
Tests AI that edits pictures using both words and example images.
Visually-guided image editing, where edits are conditioned on both visual cues and textual prompts, has emerged as a powerful paradigm for fine-grained, controllable content generation. Although recent generative models have shown remarkable capabilities, existing evaluations remain simplistic and insufficiently representative of real-world editing challenges. We present SpotEdit, a comprehensive benchmark designed to systematically assess visually-guided image editing methods across diverse diffusion, autoregressive, and hybrid generative models, uncovering substantial performance disparities. To address a critical yet underexplored challenge, our benchmark includes a dedicated component on hallucination, highlighting how leading models, such as GPT-4o, often hallucinate the presence of an absent visual cue and erroneously perform the editing task anyway. Our code and benchmark are publicly released at https://github.com/SaraGhazanfari/SpotEdit.
Similar Papers
GIE-Bench: Towards Grounded Evaluation for Text-Guided Image Editing
CV and Pattern Recognition
Tests if computer image edits match words.
Audio-Guided Visual Editing with Complex Multi-Modal Prompts
CV and Pattern Recognition
Lets you edit pictures using sounds and words.
EditInspector: A Benchmark for Evaluation of Text-Guided Image Edits
CV and Pattern Recognition
Checks if AI image edits are good.