Hierarchy-Aware Fine-Tuning of Vision-Language Models

Published: December 25, 2025 | arXiv ID: 2512.21529v1

By: Jiayu Li, Rajesh Gangireddy, Samet Akcay, and more

Potential Business Impact:

Teaches computers to sort images into nested categories (e.g., animal → dog → beagle) while keeping predictions consistent across levels, at a low fine-tuning cost.

Business Areas:
Image Recognition Data and Analytics, Software

Vision-Language Models (VLMs) learn powerful multimodal representations through large-scale image-text pretraining, but adapting them to hierarchical classification is underexplored. Standard approaches treat labels as flat categories and require full fine-tuning, which is expensive and produces inconsistent predictions across taxonomy levels. We propose an efficient hierarchy-aware fine-tuning framework that updates a few parameters while enforcing structural consistency. We combine two objectives: Tree-Path KL Divergence (TP-KL) aligns predictions along the ground-truth label path for vertical coherence, while Hierarchy-Sibling Smoothed Cross-Entropy (HiSCE) encourages consistent predictions among sibling classes. Both losses work in the VLM's shared embedding space and integrate with lightweight LoRA adaptation. Experiments across multiple benchmarks show consistent improvements in Full-Path Accuracy and Tree-based Inconsistency Error with minimal parameter overhead. Our approach provides an efficient strategy for adapting VLMs to structured taxonomies.
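
The abstract does not give the exact loss formulations, so the sketch below is only an illustration of the general idea: a Tree-Path KL term that pulls per-level predictions toward the ground-truth label path, and a sibling-smoothed cross-entropy that spreads label-smoothing mass over siblings of the true class. The helper names (`tree_path_kl`, `hisce`), shapes, and hyperparameters are assumptions, not the authors' implementation.

```python
# Minimal PyTorch sketch of hierarchy-aware objectives (assumed forms, not the
# paper's code). In the paper these logits would come from the VLM's shared
# image-text embedding space, with only LoRA adapter weights being trained.
import torch
import torch.nn.functional as F


def tree_path_kl(level_logits, path_labels):
    """Assumed TP-KL: at each taxonomy level, measure KL divergence between the
    predicted distribution and the ground-truth node on the label path.
    level_logits: list of [B, C_l] tensors, one per level.
    path_labels:  list of [B] long tensors, ground-truth node per level."""
    loss = 0.0
    for logits, labels in zip(level_logits, path_labels):
        log_p = F.log_softmax(logits, dim=-1)
        target = F.one_hot(labels, num_classes=logits.size(-1)).float()
        kl = (target * (torch.log(target.clamp_min(1e-8)) - log_p)).sum(-1).mean()
        loss = loss + kl
    return loss / len(level_logits)


def hisce(logits, labels, siblings, eps=0.1):
    """Assumed HiSCE: cross-entropy whose smoothing mass is spread only over
    the ground-truth class's siblings (classes sharing its parent).
    logits:   [B, C] leaf-level logits.
    labels:   [B] ground-truth leaf labels.
    siblings: dict mapping class id -> list of sibling class ids."""
    target = torch.zeros_like(logits)
    for i, y in enumerate(labels.tolist()):
        sib = siblings.get(y, [])
        if sib:
            target[i, y] = 1.0 - eps
            target[i, sib] = eps / len(sib)
        else:
            target[i, y] = 1.0
    log_p = F.log_softmax(logits, dim=-1)
    return -(target * log_p).sum(dim=-1).mean()


# Toy 2-level taxonomy: 3 coarse classes, 6 fine classes (2 fine per coarse).
coarse = torch.randn(4, 3, requires_grad=True)
fine = torch.randn(4, 6, requires_grad=True)
y_coarse = torch.randint(0, 3, (4,))
y_fine = torch.randint(0, 6, (4,))
sib = {0: [1], 1: [0], 2: [3], 3: [2], 4: [5], 5: [4]}

loss = tree_path_kl([coarse, fine], [y_coarse, y_fine]) + hisce(fine, y_fine, sib)
loss.backward()  # in practice, gradients would flow only into LoRA adapter parameters
```

In this sketch the two terms are simply summed; how the paper weights or combines them is not specified in the abstract.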

Page Count
10 pages

Category
Computer Science:
CV and Pattern Recognition