Synthesizing Visual Concepts as Vision-Language Programs
By: Antonia Wüst, Wolfgang Stammer, Hikaru Shindo, and more
Potential Business Impact:
Makes AI understand pictures and think logically.
Vision-Language Models (VLMs) achieve strong performance on multimodal tasks but often fail at systematic visual reasoning, leading to inconsistent or illogical outputs. Neuro-symbolic methods promise to address this by inducing interpretable logical rules, though they typically rely on rigid, domain-specific perception modules. We propose Vision-Language Programs (VLP), which combine the perceptual flexibility of VLMs with the systematic reasoning of program synthesis. Rather than embedding reasoning inside the VLM, VLP leverages the model to produce structured visual descriptions that are compiled into neuro-symbolic programs. The resulting programs execute directly on images, remain consistent with task constraints, and provide human-interpretable explanations that enable easy shortcut mitigation. Experiments on synthetic and real-world datasets demonstrate that VLPs outperform direct and structured prompting, particularly on tasks requiring complex logical reasoning.
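To make the pipeline concrete, here is a minimal, hypothetical sketch of the overall idea: a VLM grounds an image into a structured description, and a separate, interpretable program (standing in for a synthesized neuro-symbolic program) is executed on that description. The schema, function names, and the toy concept below are illustrative assumptions, not the paper's actual interface or DSL.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical structured visual description produced by a VLM.
# Fields are illustrative only, not the paper's actual schema.
@dataclass
class SceneObject:
    shape: str
    color: str

@dataclass
class SceneDescription:
    objects: List[SceneObject]

def describe_image_with_vlm(image_path: str) -> SceneDescription:
    """Placeholder for prompting a VLM and parsing its structured output.
    Hard-coded example scene for illustration only."""
    return SceneDescription(objects=[SceneObject("circle", "red"),
                                     SceneObject("square", "blue")])

# A tiny "program" over scene descriptions: a composition of
# interpretable predicates, standing in for a synthesized program.
Program = Callable[[SceneDescription], bool]

def exists(pred: Callable[[SceneObject], bool]) -> Program:
    # True if any object in the scene satisfies the predicate.
    return lambda scene: any(pred(obj) for obj in scene.objects)

def conjunction(*programs: Program) -> Program:
    # True only if every sub-program holds on the scene.
    return lambda scene: all(p(scene) for p in programs)

# Example concept: "there is a red circle AND some blue object".
concept = conjunction(
    exists(lambda o: o.color == "red" and o.shape == "circle"),
    exists(lambda o: o.color == "blue"),
)

def classify(image_path: str, program: Program) -> bool:
    """Execute a program on an image by first grounding it with the VLM."""
    scene = describe_image_with_vlm(image_path)
    return program(scene)

if __name__ == "__main__":
    print(classify("example.jpg", concept))  # True for the hard-coded scene
```

Because the program is an explicit composition of named predicates rather than weights inside the VLM, its decisions can be inspected and, for example, a predicate that encodes a dataset shortcut can be removed directly.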
Similar Papers
Vision language models have difficulty recognizing virtual objects
CV and Pattern Recognition
AI struggles to imagine unseen objects in pictures.
NePTune: A Neuro-Pythonic Framework for Tunable Compositional Reasoning on Vision-Language
Artificial Intelligence
Helps computers understand and solve new visual puzzles.
Concept-RuleNet: Grounded Multi-Agent Neurosymbolic Reasoning in Vision Language Models
CV and Pattern Recognition
Makes AI explain its visual guesses.