COCO-Tree: Compositional Hierarchical Concept Trees for Enhanced Reasoning in Vision Language Models
By: Sanchit Sinha, Guangzhi Xiong, Aidong Zhang
Potential Business Impact:
Helps computers understand pictures with many parts.
Compositional reasoning remains a persistent weakness of modern vision language models (VLMs): they often falter when a task hinges on understanding how multiple objects, attributes, and relations interact within an image. Multiple lines of research have attempted to improve compositional performance through techniques such as restructured prompting and chain-of-thought reasoning. A more recent line of work imparts additional reasoning to VLMs using well-trained Large Language Models (LLMs), whose linguistic understanding far exceeds that of VLMs, to compensate for VLMs' limited linguistic prowess. However, these approaches are either resource-intensive or do not provide an interpretable reasoning process. In this paper, we present 'COCO-Tree', a novel approach that augments VLM outputs with carefully designed neurosymbolic concept trees learned from LLMs to improve VLMs' linguistic reasoning. COCO-Tree's beam-search-inspired reasoning process boosts compositionality performance and provides a rationale behind VLM predictions. Empirical results on four compositionality benchmarks (Winoground, EqBench, ColorSwap, and SugarCrepe) across seven open-source VLMs of varying sizes demonstrate that COCO-Tree significantly improves compositional generalization, by 5-10% over baselines.
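The abstract describes COCO-Tree's inference as a beam-search-inspired traversal over LLM-derived concept trees, with the surviving path serving as the rationale for the prediction. The paper's actual data structures and scoring are not given here, so the sketch below is purely illustrative: `ConceptNode`, `beam_search_rationale`, `score_fn`, and `beam_width` are all assumed names, and the scoring callback stands in for whatever VLM image-text agreement signal the method actually uses.

```python
# Minimal sketch, assuming COCO-Tree-style beam search over a concept tree.
# All names here are illustrative; nothing below is the paper's actual code.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple


@dataclass
class ConceptNode:
    """One node of a concept tree: a textual concept plus its sub-concepts."""
    concept: str
    children: List["ConceptNode"] = field(default_factory=list)


def beam_search_rationale(
    root: ConceptNode,
    score_fn: Callable[[str], float],  # assumed VLM image-concept score
    beam_width: int = 3,
) -> Tuple[List[str], float]:
    """Keep the top-`beam_width` root-to-leaf paths by mean concept score.

    The best surviving path doubles as a human-readable rationale."""
    beams = [([root.concept], score_fn(root.concept), root)]
    while any(node.children for _, _, node in beams):
        candidates = []
        for path, total, node in beams:
            if not node.children:        # leaf: carry the beam forward as-is
                candidates.append((path, total, node))
                continue
            for child in node.children:  # expand every sub-concept
                s = score_fn(child.concept)
                candidates.append((path + [child.concept], total + s, child))
        # rank partial paths by depth-normalized score, keep the best few
        candidates.sort(key=lambda c: c[1] / len(c[0]), reverse=True)
        beams = candidates[:beam_width]
    best_path, best_total, _ = beams[0]
    return best_path, best_total / len(best_path)
```

In this reading, a caller would supply a `score_fn` that queries the underlying VLM (for example, an image-text matching head) for each candidate concept; normalizing by path length keeps shallow and deep branches comparable during the search.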
Similar Papers
CoCoVa: Chain of Continuous Vision-Language Thought for Latent Space Reasoning
CV and Pattern Recognition
Helps computers understand pictures like people do.
Decomposing Visual Classification: Assessing Tree-Based Reasoning in VLMs
CV and Pattern Recognition
Helps computers understand pictures better with step-by-step thinking.
Composition-Grounded Instruction Synthesis for Visual Reasoning
CV and Pattern Recognition
Teaches computers to understand charts and websites.