Multi-task Learning with Extended Temporal Shift Module for Temporal Action Localization
By: Anh-Kiet Duong, Petra Gomez-Krämer
We present our solution to the BinEgo-360 Challenge at ICCV 2025, which focuses on temporal action localization (TAL) in multi-perspective and multi-modal video settings. The challenge provides a dataset of panoramic, third-person, and egocentric recordings annotated with fine-grained action classes. Our approach builds on the Temporal Shift Module (TSM), which we extend to TAL by introducing a background class and classifying fixed-length, non-overlapping intervals. We employ a multi-task learning framework that jointly optimizes scene classification and TAL, leveraging contextual cues between actions and environments. Finally, we integrate multiple models through a weighted ensemble strategy, which improves the robustness and consistency of predictions. Our method ranked first in both the initial and extended rounds of the competition, demonstrating the effectiveness of combining multi-task learning, an efficient backbone, and ensemble learning for TAL.
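The abstract packs three ingredients into one pipeline: TAL recast as interval classification with a background class, a joint scene/action objective, and a weighted ensemble. The PyTorch snippet below is a minimal sketch of that formulation, not the authors' released code; every name and hyper-parameter here (`MultiTaskTALHead`, `feature_dim`, `num_action_classes`, `num_scene_classes`, `scene_weight`, the ensemble weights) is an illustrative assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskTALHead(nn.Module):
    """Two classification heads on top of pooled per-interval features:
    one for actions, with an extra background class that turns clip-level
    recognition into temporal localization, and one for the video's scene."""

    def __init__(self, feature_dim: int = 2048,
                 num_action_classes: int = 20,
                 num_scene_classes: int = 5):
        super().__init__()
        self.action_head = nn.Linear(feature_dim, num_action_classes + 1)  # +1 = background
        self.scene_head = nn.Linear(feature_dim, num_scene_classes)

    def forward(self, clip_features: torch.Tensor):
        # clip_features: (batch, feature_dim), e.g. TSM backbone features
        # pooled over one fixed-length, non-overlapping interval.
        return self.action_head(clip_features), self.scene_head(clip_features)

def multitask_loss(action_logits, scene_logits, action_labels, scene_labels,
                   scene_weight: float = 0.5):
    """Joint objective: TAL-as-interval-classification plus scene classification."""
    return (F.cross_entropy(action_logits, action_labels)
            + scene_weight * F.cross_entropy(scene_logits, scene_labels))

def weighted_ensemble(prob_list, weights):
    """Convex combination of per-model softmax scores for one interval."""
    w = torch.tensor(weights, dtype=torch.float32)
    w = w / w.sum()
    return sum(wi * p for wi, p in zip(w, prob_list))

def intervals_to_segments(pred_labels, interval_len: float, background: int = 0):
    """Merge runs of identical non-background interval predictions (a list of
    int class ids) into (start_time, end_time, class) segments."""
    segments, start = [], None
    for i, c in enumerate(list(pred_labels) + [background]):  # sentinel flushes the last run
        if start is not None and c != pred_labels[start]:
            segments.append((start * interval_len, i * interval_len, pred_labels[start]))
            start = None
        if start is None and c != background:
            start = i
    return segments
```

At inference, one would average the per-interval softmax scores of several trained models with `weighted_ensemble`, take the argmax per interval, and pass the resulting label sequence through `intervals_to_segments` to recover start/end times; the exact interval length and ensemble weights used in the challenge entry are not stated in the abstract.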