Towards Mitigating Modality Bias in Vision-Language Models for Temporal Action Localization

Published: January 28, 2026 | arXiv ID: 2601.21078v1

By: Jiaqi Li, Guangming Wang, Shuntian Zheng, and more

Potential Business Impact:

Improves automatic detection of when and what actions happen in videos by balancing visual and language cues.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Temporal Action Localization (TAL) requires identifying both the boundaries and categories of actions in untrimmed videos. While vision-language models (VLMs) offer rich semantics to complement visual evidence, existing approaches tend to overemphasize linguistic priors at the expense of visual performance, leading to a pronounced modality bias. We propose ActionVLM, a vision-language aggregation framework that systematically mitigates modality bias in TAL. Our key insight is to preserve vision as the dominant signal while adaptively exploiting language only when beneficial. To this end, we introduce (i) a debiasing reweighting module that estimates the language advantage (the incremental benefit of language over vision-only predictions) and dynamically reweights the language modality accordingly, and (ii) a residual aggregation strategy that treats language as a complementary refinement rather than the primary driver. This combination alleviates modality bias, reduces overconfidence from linguistic priors, and strengthens temporal reasoning. Experiments on THUMOS14 show that our model outperforms the state of the art by up to 3.2% mAP.
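
The abstract describes two mechanisms: a reweighting step driven by an estimated "language advantage" and a residual aggregation in which language only refines vision-dominant predictions. The PyTorch-style sketch below illustrates how such a combination could be wired; the module name, tensor shapes, and the gating parameterization are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class DebiasedResidualAggregator(nn.Module):
    """Hypothetical sketch: vision-dominant fusion with a language-advantage gate.

    Assumes per-snippet logits from a vision head and a language (VLM) head,
    both of shape (batch, time, num_classes). Not the paper's implementation.
    """

    def __init__(self, num_classes: int, hidden: int = 128):
        super().__init__()
        # Small MLP that estimates the "language advantage" per temporal snippet
        # from the concatenated logits of both heads (an assumed parameterization).
        self.advantage_net = nn.Sequential(
            nn.Linear(2 * num_classes, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, vision_logits: torch.Tensor,
                language_logits: torch.Tensor) -> torch.Tensor:
        # Estimate how much language improves over vision-only predictions,
        # squashed to a [0, 1] weight per snippet.
        adv_input = torch.cat([vision_logits, language_logits], dim=-1)
        language_weight = torch.sigmoid(self.advantage_net(adv_input))  # (B, T, 1)

        # Residual aggregation: vision stays the primary driver; the language
        # branch contributes only a gated refinement term on top of it.
        return vision_logits + language_weight * (language_logits - vision_logits)


# Usage with dummy tensors: 2 videos, 100 snippets, 20 action classes.
if __name__ == "__main__":
    fuse = DebiasedResidualAggregator(num_classes=20)
    v = torch.randn(2, 100, 20)   # vision-only snippet logits
    l = torch.randn(2, 100, 20)   # language (VLM) snippet logits
    print(fuse(v, l).shape)       # torch.Size([2, 100, 20])
```

When the estimated weight is near zero the output reduces to the vision-only logits, which matches the stated goal of keeping vision dominant and using language only where it adds benefit.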

Page Count
17 pages

Category
Computer Science:
CV and Pattern Recognition