GenIR: Generative Visual Feedback for Mental Image Retrieval
By: Diji Yang, Minghao Liu, Chung-Hsiang Lo, and more
Potential Business Impact:
Helps you find pictures you have in mind.
Vision-language models (VLMs) have shown strong performance on text-to-image retrieval benchmarks. However, translating this success to real-world applications remains a challenge. In practice, human search behavior is rarely a one-shot action. Instead, it is often a multi-round process guided by a mental image of the target, ranging from a vague recollection to a vivid mental representation. Motivated by this gap, we study the task of Mental Image Retrieval (MIR), which targets the realistic yet underexplored setting where users refine their search for a mentally envisioned image through multi-round interactions with an image search engine. Central to successful interactive retrieval is the machine's capability to provide users with clear, actionable feedback; however, existing methods rely on indirect or abstract verbal feedback, which can be ambiguous, misleading, or too vague for users to refine the query effectively. To overcome this, we propose GenIR, a generative multi-round retrieval paradigm that leverages diffusion-based image generation to explicitly reify the AI system's understanding at each round. These synthetic visual representations provide clear, interpretable feedback, enabling users to refine their queries intuitively and effectively. We further introduce a fully automated pipeline to generate a high-quality multi-round MIR dataset. Experimental results demonstrate that GenIR significantly outperforms existing interactive methods in the MIR scenario. This work establishes a new task with a dataset and an effective generative retrieval method, providing a foundation for future research in this direction.
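The paradigm described in the abstract can be pictured as a simple loop: retrieve candidates for the current query, render the system's current interpretation of that query as a synthetic image, show it to the user, and let the user refine the query. Below is a minimal sketch of such a loop, assuming an off-the-shelf CLIP retriever and a Stable Diffusion generator; the model names, the small in-memory gallery, and the interactive refinement prompt are illustrative assumptions, not the authors' implementation or dataset.

```python
# Minimal sketch of a GenIR-style interactive retrieval loop (not the paper's code).
# Assumptions: a CLIP model scores a small candidate gallery, and a Stable Diffusion
# pipeline renders the system's current "understanding" of the query as visual feedback.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor
from diffusers import StableDiffusionPipeline

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
sd = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")


def retrieve(query: str, gallery: list[Image.Image], k: int = 5) -> list[int]:
    """Rank gallery images by CLIP text-image cosine similarity to the query."""
    inputs = proc(text=[query], images=gallery, return_tensors="pt", padding=True)
    with torch.no_grad():
        txt = clip.get_text_features(
            input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"]
        )
        img = clip.get_image_features(pixel_values=inputs["pixel_values"])
    sims = torch.nn.functional.cosine_similarity(txt, img)
    return sims.topk(min(k, len(gallery))).indices.tolist()


def search_loop(initial_query: str, gallery: list[Image.Image], rounds: int = 3) -> list[int]:
    """Multi-round search: retrieve, show generated visual feedback, refine the query."""
    query = initial_query
    for r in range(rounds):
        top_indices = retrieve(query, gallery)
        # Reify the system's current understanding of the query as a synthetic image.
        feedback_image = sd(query).images[0]
        feedback_image.save(f"round_{r}_feedback.png")
        # A real or simulated user would compare the feedback image (and retrieved
        # results) against their mental image; input() just stands in for that step.
        refinement = input(f"Round {r}: refine your query (Enter to keep it): ")
        query = refinement or query
    return retrieve(query, gallery)
```

In the paper's setting, the refinement signal comes from a user comparing the generated feedback image against the image they have in mind; the interactive prompt above is only a placeholder for that step.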
Similar Papers
Reverse-Engineering the Retrieval Process in GenIR Models
Information Retrieval
Shows how computers pick the best search results.
A Little More Like This: Text-to-Image Retrieval with Vision-Language Models Using Relevance Feedback
CV and Pattern Recognition
Improves image search by learning from results.
MIRACL-VISION: A Large, multilingual, visual document retrieval benchmark
Information Retrieval
Helps computers find information in pictures.