Synthesizing Visual Concepts as Vision-Language Programs

Published: November 24, 2025 | arXiv ID: 2511.18964v1

By: Antonia Wüst, Wolfgang Stammer, Hikaru Shindo, and more

Potential Business Impact:

Enables AI systems to interpret images and reason about them logically, with human-readable explanations.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Vision-Language Models (VLMs) achieve strong performance on multimodal tasks but often fail at systematic visual reasoning, producing inconsistent or illogical outputs. Neuro-symbolic methods promise to address this by inducing interpretable logical rules, though they typically rely on rigid, domain-specific perception modules. We propose Vision-Language Programs (VLP), which combine the perceptual flexibility of VLMs with the systematic reasoning of program synthesis. Rather than embedding reasoning inside the VLM, VLP leverages the model to produce structured visual descriptions that are compiled into neuro-symbolic programs. The resulting programs execute directly on images, remain consistent with task constraints, and provide human-interpretable explanations that enable easy shortcut mitigation. Experiments on synthetic and real-world datasets demonstrate that VLPs outperform direct and structured prompting, particularly on tasks requiring complex logical reasoning.
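A minimal sketch of the pipeline the abstract describes, assuming a toy stand-in for the VLM perception step: structured scene descriptions are produced per image, and a program consistent with labeled examples is synthesized over them. Every function name, predicate, and scene description below is illustrative and not taken from the paper.

# Hypothetical sketch of the VLP idea: a VLM emits structured visual
# descriptions, and a logical program consistent with labeled examples
# is synthesized over them. All names here are illustrative only.

from typing import Callable

def describe_image(image_id: str) -> list[dict]:
    # Stand-in for the VLM perception step; the paper's pipeline would
    # obtain these structured descriptions from a vision-language model.
    toy_scenes = {
        "pos_1": [{"shape": "circle", "color": "red"},
                  {"shape": "square", "color": "red"}],
        "pos_2": [{"shape": "circle", "color": "red"},
                  {"shape": "square", "color": "blue"}],
        "neg_1": [{"shape": "square", "color": "blue"}],
        "neg_2": [{"shape": "circle", "color": "blue"},
                  {"shape": "square", "color": "red"}],
    }
    return toy_scenes[image_id]

# Candidate predicates over a scene: the symbolic vocabulary the
# synthesizer searches (a real system would enumerate a richer DSL).
Predicate = Callable[[list[dict]], bool]
candidates: dict[str, Predicate] = {
    "exists red circle": lambda objs: any(
        o["shape"] == "circle" and o["color"] == "red" for o in objs),
    "all objects red": lambda objs: all(
        o["color"] == "red" for o in objs),
    "exists square": lambda objs: any(
        o["shape"] == "square" for o in objs),
}

def synthesize(pos: list[str], neg: list[str]) -> str | None:
    # Return the first candidate program consistent with every labeled
    # example: true on all positives, false on all negatives.
    for name, pred in candidates.items():
        if all(pred(describe_image(i)) for i in pos) and \
           not any(pred(describe_image(i)) for i in neg):
            return name
    return None

rule = synthesize(pos=["pos_1", "pos_2"], neg=["neg_1", "neg_2"])
print("Induced rule:", rule)  # -> Induced rule: exists red circle

Because the induced program is an explicit symbolic rule rather than a hidden decision inside the VLM, a human can inspect it and discard shortcut rules (e.g., "exists square" above, which is rejected for firing on a negative example), matching the abstract's claim about interpretable explanations and shortcut mitigation.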

Page Count
36 pages

Category
Computer Science: Artificial Intelligence