Temporal Alignment-Free Video Matching for Few-shot Action Recognition
By: SuBeen Lee, WonJun Moon, Hyun Seok Seong, and more
Potential Business Impact:
Teaches computers to recognize actions from few examples.
Few-Shot Action Recognition (FSAR) aims to train a model with only a few labeled video instances. A key challenge in FSAR is handling divergent narrative trajectories for precise video matching. While frame- and tuple-level alignment approaches have shown promise, they rely heavily on pre-defined, length-dependent alignment units (e.g., frames or tuples), which limits flexibility for actions of varying lengths and speeds. In this work, we introduce a novel TEmporal Alignment-free Matching (TEAM) approach, which eliminates the need for temporal units in action representation and brute-force alignment during matching. Specifically, TEAM represents each video with a fixed set of pattern tokens that capture globally discriminative clues within the video instance regardless of action length or speed, ensuring flexibility. Furthermore, TEAM is inherently efficient, using token-wise comparisons to measure similarity between videos, unlike existing methods that rely on pairwise comparisons for temporal alignment. Additionally, we propose an adaptation process that identifies and removes common information across classes, establishing clear boundaries even between novel categories. Extensive experiments demonstrate the effectiveness of TEAM. Code is available at github.com/leesb7426/TEAM.
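To make the token-wise matching idea concrete, the sketch below (a rough illustration, not the authors' released implementation) uses a small set of learnable pattern tokens that cross-attend over frame features of arbitrary length, then scores a query-support pair by comparing tokens one-to-one instead of aligning every frame pair. All module and variable names here (PatternTokenEncoder, tokenwise_similarity, dim, num_tokens) are hypothetical.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PatternTokenEncoder(nn.Module):
        """Sketch: summarize a variable-length frame sequence into a fixed
        number of pattern tokens via cross-attention (assumed design)."""
        def __init__(self, dim=512, num_tokens=8, num_heads=8):
            super().__init__()
            # Learnable pattern tokens shared across all videos.
            self.tokens = nn.Parameter(torch.randn(num_tokens, dim))
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

        def forward(self, frame_feats):  # frame_feats: (B, T, dim), any T
            B = frame_feats.size(0)
            queries = self.tokens.unsqueeze(0).expand(B, -1, -1)
            # Tokens attend over the frames, so the output shape is fixed
            # regardless of action length or speed.
            out, _ = self.attn(queries, frame_feats, frame_feats)
            return out  # (B, num_tokens, dim)

    def tokenwise_similarity(query_tokens, support_tokens):
        """Compare videos token-by-token (num_tokens comparisons) rather than
        aligning all frame pairs (T_q x T_s comparisons)."""
        q = F.normalize(query_tokens, dim=-1)   # (B, K, dim)
        s = F.normalize(support_tokens, dim=-1) # (B, K, dim)
        return (q * s).sum(-1).mean(-1)         # per-token cosine, averaged -> (B,)

    # Usage: two clips of different lengths map to the same fixed token set.
    enc = PatternTokenEncoder()
    q_feats, s_feats = torch.randn(1, 30, 512), torch.randn(1, 75, 512)
    score = tokenwise_similarity(enc(q_feats), enc(s_feats))

Because both videos are reduced to the same fixed number of tokens, the comparison cost is independent of clip length, which is the efficiency contrast with pairwise temporal-alignment methods described above.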
Similar Papers
Joint Image-Instance Spatial-Temporal Attention for Few-shot Action Recognition
CV and Pattern Recognition
Helps computers learn new actions from few examples.
Task-Specific Distance Correlation Matching for Few-Shot Action Recognition
CV and Pattern Recognition
Teaches computers to recognize actions from few examples.
Hierarchical Relation-augmented Representation Generalization for Few-shot Action Recognition
CV and Pattern Recognition
Teaches computers to learn new actions from few examples.