On Data Synthesis and Post-training for Visual Abstract Reasoning
By: Ke Zhu, Yu Wang, Jiangjiang Liu, and more
Potential Business Impact:
Helps computers understand pictures like humans do.
This paper is a pioneering effort to address abstract visual reasoning (AVR) problems with large vision-language models (VLMs). We make a standard LLaVA-NeXT 7B model capable of perceiving and reasoning about specific AVR problems, surpassing both powerful open-source VLMs (e.g., Qwen-2-VL-72B) and closed-source ones (e.g., GPT-4o) by a significant margin. This is a notable breakthrough, since almost all previous VLMs fail or perform near chance on representative AVR benchmarks. Our key to success is an innovative data synthesis and post-training process designed to reduce the task difficulty and elicit step-by-step learning from the model. Our 7B model is also shown to behave well on AVR without sacrificing general multimodal comprehension abilities. We hope our paper can serve as an early effort in this area and inspire further research in abstract visual reasoning.
Similar Papers
No Labels, No Problem: Training Visual Reasoners with Multimodal Verifiers
CV and Pattern Recognition
AI learns to see and think better.
RVTBench: A Benchmark for Visual Reasoning Tasks
CV and Pattern Recognition
Teaches computers to understand videos like people.
VisuRiddles: Fine-grained Perception is a Primary Bottleneck for Multimodal Large Language Models in Abstract Visual Reasoning
CV and Pattern Recognition
Helps computers understand abstract pictures better.