VQ-VA World: Towards High-Quality Visual Question-Visual Answering
By: Chenhui Gou, Zilong Chen, Zeyu Wang, and more
Potential Business Impact:
Lets computers answer questions by drawing pictures instead of writing text.
This paper studies Visual Question-Visual Answering (VQ-VA): generating an image, rather than text, in response to a visual question -- an ability that has recently emerged in proprietary systems such as NanoBanana and GPT-Image. To bring this capability to open-source models as well, we introduce VQ-VA World, a data-centric framework built around an agentic pipeline for large-scale, targeted data construction. Deployed at web scale, this pipeline crawls ~1.8M high-quality, interleaved image-text samples for model training. For evaluation, we further release IntelligentBench, a human-curated benchmark that systematically assesses VQ-VA along the axes of world knowledge, design knowledge, and reasoning. Training with VQ-VA World data yields strong empirical gains: it helps LightFusion attain 53.06 on IntelligentBench, substantially surpassing the best prior open-source baselines (i.e., 7.78 from vanilla LightFusion; 1.94 from UniWorld-V1) and significantly narrowing the gap to leading proprietary systems (e.g., 81.67 from NanoBanana; 82.64 from GPT-Image). By releasing the full suite of model weights, datasets, and pipelines, we hope to stimulate future research on VQ-VA.
Similar Papers
FlipVQA-Miner: Cross-Page Visual Question-Answer Mining from Textbooks
Artificial Intelligence
Makes AI smarter using old textbooks.
WearVQA: A Visual Question Answering Benchmark for Wearables in Egocentric Authentic Real-world scenarios
Artificial Intelligence
Tests whether smart-glasses AI can answer questions about what you see.
Text-VQA Aug: Pipelined Harnessing of Large Multimodal Models for Automated Synthesis
CV and Pattern Recognition
Computers can now answer questions about text in pictures.