Towards Mitigating Modality Bias in Vision-Language Models for Temporal Action Localization
By: Jiaqi Li, Guangming Wang, Shuntian Zheng, and more
Potential Business Impact:
Helps software find and label actions in videos by balancing words and pictures.
Temporal Action Localization (TAL) requires identifying both the boundaries and categories of actions in untrimmed videos. While vision-language models (VLMs) offer rich semantics to complement visual evidence, existing approaches tend to overemphasize linguistic priors at the expense of visual performance, leading to a pronounced modality bias. We propose ActionVLM, a vision-language aggregation framework that systematically mitigates modality bias in TAL. Our key insight is to preserve vision as the dominant signal while adaptively exploiting language only when beneficial. To this end, we introduce (i) a debiasing reweighting module that estimates the language advantage (the incremental benefit of language over vision-only predictions) and dynamically reweights the language modality accordingly, and (ii) a residual aggregation strategy that treats language as a complementary refinement rather than the primary driver. This combination alleviates modality bias, reduces overconfidence from linguistic priors, and strengthens temporal reasoning. Experiments on THUMOS14 show that our model outperforms the state of the art by up to 3.2% mAP.
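The abstract describes two components: a debiasing reweighting module that gates the language branch by its estimated advantage, and a residual aggregation that keeps vision dominant. Below is a minimal PyTorch-style sketch of how such a gated, vision-dominant fusion could look; the class name, the advantage gate design, and the feature shapes are illustrative assumptions, not the authors' released implementation.

```python
# Sketch of the two ideas in the abstract: a gate that estimates the
# "language advantage" per snippet, and a residual aggregation where the
# reweighted language logits refine vision-dominant predictions.
# All names here are hypothetical illustrations.
import torch
import torch.nn as nn


class DebiasedResidualAggregation(nn.Module):
    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        # Vision-only classifier: the primary (dominant) prediction head.
        self.vision_head = nn.Linear(dim, num_classes)
        # Language head: produces a complementary correction, not the main signal.
        self.language_head = nn.Linear(dim, num_classes)
        # Gate estimating how much language is expected to help beyond vision alone.
        self.advantage_gate = nn.Sequential(
            nn.Linear(2 * dim, dim),
            nn.ReLU(),
            nn.Linear(dim, 1),
        )

    def forward(self, vis_feat: torch.Tensor, lang_feat: torch.Tensor) -> torch.Tensor:
        # vis_feat, lang_feat: (batch, time, dim) snippet-level features.
        vision_logits = self.vision_head(vis_feat)
        language_logits = self.language_head(lang_feat)

        # Estimate the language advantage from both modalities and squash it to
        # (0, 1); small values suppress the language branch when linguistic
        # priors would not help, mitigating modality bias.
        advantage = torch.sigmoid(
            self.advantage_gate(torch.cat([vis_feat, lang_feat], dim=-1))
        )

        # Residual aggregation: language acts as a gated refinement on top of
        # the vision-dominant logits rather than as the primary driver.
        return vision_logits + advantage * language_logits
```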
Similar Papers
Vision-Language Models Unlock Task-Centric Latent Actions
Machine Learning (CS)
Teaches robots to ignore distractions and learn better.
Seeing to Act, Prompting to Specify: A Bayesian Factorization of Vision Language Action Policy
Robotics
Helps robots learn new tasks from instructions.
VT-LVLM-AR: A Video-Temporal Large Vision-Language Model Adapter for Fine-Grained Action Recognition in Long-Term Videos
CV and Pattern Recognition
Helps computers understand actions in videos better.