Decomposing Visual Classification: Assessing Tree-Based Reasoning in VLMs
By: Sary Elmansoury, Islam Mesabah, Gerrit Großmann, and more
Potential Business Impact:
Helps computers understand pictures better with step-by-step thinking.
Vision language models (VLMs) excel at zero-shot visual classification, but their performance on fine-grained tasks and large hierarchical label spaces is understudied. This paper investigates whether structured, tree-based reasoning can enhance VLM performance. We introduce a framework that decomposes classification into interpretable decisions using decision trees and evaluate it on fine-grained (GTSRB) and coarse-grained (CIFAR-10) datasets. Although the model achieves 98.2% accuracy in understanding the tree knowledge, tree-based reasoning consistently underperforms standard zero-shot prompting. We also explore enriching the tree prompts with LLM-generated classes and image descriptions to improve alignment; the added descriptions enhance the performance of both the tree-based and zero-shot methods. Our findings highlight limitations of structured reasoning in visual classification and offer insights for designing more interpretable VLM systems.
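The core idea of the framework can be sketched as a decision tree whose internal nodes are binary visual questions and whose leaves are class labels. This is a minimal illustration, not the paper's implementation: `ask_vlm` is a hypothetical stand-in for a real VLM query, and the toy tree and labels are invented for the example.

```python
def ask_vlm(image, question):
    # Placeholder: a real implementation would prompt a VLM with the image
    # and the question, then parse a yes/no answer from its response.
    # Here the "image" is modeled as the set of attributes that hold for it.
    return question in image


# Each internal node is (question, yes_subtree, no_subtree); leaves are labels.
# The tree below is purely illustrative, not taken from the paper.
tree = ("is it a traffic sign?",
        ("is it round?", "speed limit", "warning sign"),
        ("is it an animal?", "deer", "automobile"))


def classify(image, node):
    """Walk the decision tree, asking one interpretable question per step."""
    if isinstance(node, str):  # leaf: final class label
        return node
    question, yes_branch, no_branch = node
    next_node = yes_branch if ask_vlm(image, question) else no_branch
    return classify(image, next_node)


# Toy image represented by the attributes a VLM would confirm:
image = {"is it a traffic sign?", "is it round?"}
print(classify(image, tree))  # → speed limit
```

Each root-to-leaf path yields a human-readable chain of decisions, which is the interpretability benefit the abstract refers to; the paper's finding is that this decomposition nonetheless underperforms a single zero-shot prompt.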
Similar Papers
Perceiving, Reasoning, Adapting: A Dual-Layer Framework for VLM-Guided Precision Robotic Manipulation
Robotics
Robots learn to do tricky jobs with speed and accuracy.
COCO-Tree: Compositional Hierarchical Concept Trees for Enhanced Reasoning in Vision Language Models
CV and Pattern Recognition
Helps computers understand pictures with many parts.
VisuoThink: Empowering LVLM Reasoning with Multimodal Tree Search
Computation and Language
Helps computers think through pictures and words.