Hierarchy-Aware Fine-Tuning of Vision-Language Models
By: Jiayu Li, Rajesh Gangireddy, Samet Akcay, and more
Potential Business Impact:
Teaches computers to sort things into nested categories.
Vision-Language Models (VLMs) learn powerful multimodal representations through large-scale image-text pretraining, but adapting them to hierarchical classification is underexplored. Standard approaches treat labels as flat categories and require full fine-tuning, which is expensive and produces predictions that are inconsistent across taxonomy levels. We propose an efficient hierarchy-aware fine-tuning framework that updates only a small fraction of parameters while enforcing structural consistency. We combine two objectives: Tree-Path KL Divergence (TP-KL) aligns predictions along the ground-truth label path for vertical coherence, while Hierarchy-Sibling Smoothed Cross-Entropy (HiSCE) encourages consistent predictions among sibling classes. Both losses operate in the VLM's shared embedding space and integrate with lightweight LoRA adaptation. Experiments across multiple benchmarks show consistent gains in Full-Path Accuracy and reductions in Tree-based Inconsistency Error with minimal parameter overhead. Our approach provides an efficient strategy for adapting VLMs to structured taxonomies.
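The abstract does not give the loss equations, but both objectives can be approximated from their descriptions. Below is a minimal PyTorch sketch, assuming a model with one classification head per taxonomy level and the taxonomy supplied as a sibling-index map; the names `tp_kl_loss`, `hisce_loss`, `siblings`, and the smoothing weight `eps` are hypothetical illustrations, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def tp_kl_loss(level_logits, path_labels):
    """Tree-Path KL Divergence (sketch): align each level's prediction
    with the ground-truth node on the label path. For a one-hot target,
    KL(target || pred) reduces to cross-entropy at that level.

    level_logits: list of [B, C_l] logit tensors, one per taxonomy level.
    path_labels:  [B, L] tensor of ground-truth indices along the path.
    """
    losses = [F.cross_entropy(logits, path_labels[:, l])
              for l, logits in enumerate(level_logits)]
    return torch.stack(losses).mean()

def hisce_loss(logits, labels, siblings, eps=0.1):
    """Hierarchy-Sibling Smoothed Cross-Entropy (sketch): label smoothing
    that spreads the smoothing mass only over siblings of the true class
    (same parent) rather than uniformly over all classes.

    logits:   [B, C] leaf-level logits.
    labels:   [B] ground-truth leaf indices.
    siblings: dict mapping class index -> list of sibling indices.
    """
    B, C = logits.shape
    target = torch.zeros(B, C, device=logits.device)
    for i, y in enumerate(labels.tolist()):
        sibs = siblings.get(y, [])
        if sibs:
            target[i, y] = 1.0 - eps
            target[i, sibs] = eps / len(sibs)  # smooth over siblings only
        else:
            target[i, y] = 1.0                 # no siblings: plain one-hot
    return -(target * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()

# Toy two-level taxonomy: leaves 0,1 share one parent; leaves 2,3 the other.
siblings = {0: [1], 1: [0], 2: [3], 3: [2]}
level_logits = [torch.randn(4, 2), torch.randn(4, 4)]  # coarse and leaf heads
path_labels = torch.tensor([[0, 0], [0, 1], [1, 2], [1, 3]])
loss = (tp_kl_loss(level_logits, path_labels)
        + hisce_loss(level_logits[-1], path_labels[:, -1], siblings))
```

In a LoRA fine-tuning loop, the two terms would simply be added to the task loss with scalar weights (again hypothetical), e.g. `loss = ce + alpha * tp_kl + beta * hisce`, so that with the base VLM frozen only the adapter parameters receive gradients from the hierarchy constraints.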
Similar Papers
Fine-Grained VLM Fine-tuning via Latent Hierarchical Adapter Learning
CV and Pattern Recognition
Teaches computers to learn new things faster.
Dynamic Embedding of Hierarchical Visual Features for Efficient Vision-Language Fine-Tuning
CV and Pattern Recognition
Helps computers understand pictures and words better.
Representation Calibration and Uncertainty Guidance for Class-Incremental Learning based on Vision Language Model
CV and Pattern Recognition
Teaches computers to remember old and new pictures.