RB-FT: Rationale-Bootstrapped Fine-Tuning for Video Classification
By: Meilong Xu, Di Fu, Jiaxing Zhang, and more
Potential Business Impact:
Teaches computers to understand videos better.
Vision Language Models (VLMs) are becoming increasingly integral to multimedia understanding; however, they often struggle with domain-specific video classification tasks, particularly in cases with limited data. This stems from a critical *rationale gap*: sparse domain data is insufficient to bridge the semantic distance between complex spatio-temporal content and abstract classification labels. We propose a two-stage self-improvement paradigm to bridge this gap without new annotations. First, we prompt the VLM to generate detailed textual rationales for each video, compelling it to articulate the domain-specific logic. The VLM is then fine-tuned on these self-generated rationales, using this intermediate supervision to align its representations with the nuances of the target domain. Second, conventional supervised fine-tuning (SFT) is performed on the task labels, which is markedly more effective as a result of the model's pre-acquired domain reasoning. Extensive experiments on diverse datasets demonstrate that our method significantly outperforms direct SFT, validating self-generated rationales as an effective, annotation-efficient paradigm for adapting VLMs to domain-specific video analysis.
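To make the two-stage recipe concrete, here is a minimal Python sketch of the pipeline the abstract describes. The `VLM` interface, the `fine_tune` stub, the `rb_ft` function name, and the rationale prompt text are all hypothetical placeholders for illustration; only the two-stage structure (rationale self-training, then label SFT) comes from the abstract.

```python
# Minimal sketch of the RB-FT two-stage pipeline described in the abstract.
# The VLM interface, fine_tune stub, and prompt text are hypothetical
# placeholders; only the two-stage structure is taken from the paper.

from dataclasses import dataclass
from typing import List


@dataclass
class VideoExample:
    video_path: str   # path to the raw video clip
    label: str        # task-level classification label


RATIONALE_PROMPT = (
    "Explain, step by step, which visual and temporal cues in this video "
    "are relevant for classifying it, and why."
)


class VLM:
    """Stand-in for a vision-language model (hypothetical interface)."""

    def generate(self, video_path: str, prompt: str) -> str:
        raise NotImplementedError("plug in a real VLM inference call here")


def fine_tune(model: VLM, data: List[dict]) -> VLM:
    """Stand-in for a supervised fine-tuning loop (hypothetical)."""
    raise NotImplementedError("plug in a real SFT trainer here")


def rb_ft(model: VLM, dataset: List[VideoExample]) -> VLM:
    # Stage 1: have the model articulate a domain-specific rationale for
    # each video, then fine-tune on these self-generated rationales.
    # No new human annotations are required.
    rationale_data = [
        {"video": ex.video_path,
         "target": model.generate(ex.video_path, RATIONALE_PROMPT)}
        for ex in dataset
    ]
    model = fine_tune(model, rationale_data)

    # Stage 2: conventional SFT on the original task labels, which the
    # abstract reports is markedly more effective after Stage 1.
    label_data = [
        {"video": ex.video_path, "target": ex.label} for ex in dataset
    ]
    model = fine_tune(model, label_data)
    return model
```

In this sketch the two `fine_tune` calls would in practice share the same trainer; the stages differ only in the target text (self-generated rationale versus task label).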
Similar Papers
VideoRFT: Incentivizing Video Reasoning Capability in MLLMs via Reinforced Fine-Tuning
CV and Pattern Recognition
Teaches computers to understand videos like people.
RISE: Enhancing VLM Image Annotation with Self-Supervised Reasoning
Machine Learning (CS)
Teaches computers to explain *why* they see things.
Reasoning Pattern Matters: Learning to Reason without Human Rationales
Computation and Language
Lets computers learn reasoning without human examples.