A Framework Combining 3D CNN and Transformer for Video-Based Behavior Recognition

Published: August 2, 2025 | arXiv ID: 2508.06528v1

By: Xiuliang Zhang, Tadiwa Elisha Nyamasvisva, Chuntao Liu

Potential Business Impact:

Enables more accurate automatic recognition of human actions in video, with applications in surveillance, public safety, and human-computer interaction.

Video-based behavior recognition is essential in fields such as public safety, intelligent surveillance, and human-computer interaction. Traditional 3D Convolutional Neural Networks (3D CNNs) effectively capture local spatiotemporal features but struggle to model long-range dependencies. Conversely, Transformers excel at learning global contextual information but face challenges from high computational costs. To address these limitations, we propose a hybrid framework combining 3D CNN and Transformer architectures. The 3D CNN module extracts low-level spatiotemporal features, while the Transformer module captures long-range temporal dependencies, and a fusion mechanism integrates the two representations. Evaluated on benchmark datasets, the proposed model outperforms both traditional 3D CNNs and standalone Transformers, achieving higher recognition accuracy with manageable complexity. Ablation studies further validate the complementary strengths of the two modules. This hybrid framework offers an effective and scalable solution for video-based behavior recognition.
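To make the three-stage pipeline in the abstract concrete, the sketch below traces tensor shapes through a hypothetical instance of such a hybrid: a stride-2 3D convolution for local spatiotemporal features, a Transformer over the resulting temporal tokens, and concatenation-based fusion. All layer sizes (16 frames, 112x112 crops, 64 channels, 256-dim embeddings) are illustrative assumptions, not values from the paper.

```python
# Schematic shape-flow for a 3D CNN + Transformer hybrid.
# Sizes below are assumed for illustration; the paper does not specify them.

def conv3d_out(size, kernel=3, stride=1, padding=1):
    """Output length along one dimension of a 3D conv:
    floor((size + 2*padding - kernel) / stride) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

def hybrid_shapes(frames=16, height=112, width=112, channels=64, d_model=256):
    # Stage 1: 3D CNN extracts local spatiotemporal features, downsampling
    # time and space (a single stride-2 conv stands in for conv+pool stacks).
    t = conv3d_out(frames, stride=2)
    h = conv3d_out(height, stride=2)
    w = conv3d_out(width, stride=2)
    cnn_feat = (channels, t, h, w)     # (C, T', H', W')

    # Stage 2: spatial dims are pooled away and each of the T' time steps
    # is projected to d_model, giving the token sequence the Transformer
    # uses to model long-range temporal dependencies.
    tokens = (t, d_model)              # (sequence length, embedding dim)

    # Stage 3: fusion concatenates the pooled CNN descriptor with the
    # pooled Transformer output before the classification head.
    fused_dim = channels + d_model
    return cnn_feat, tokens, fused_dim

cnn_feat, tokens, fused_dim = hybrid_shapes()
print(cnn_feat, tokens, fused_dim)  # (64, 8, 56, 56) (8, 256) 320
```

The fusion step here uses simple concatenation; the paper's actual fusion mechanism may differ (e.g. weighted or attention-based combination).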

Page Count
9 pages

Category
Computer Science:
CV and Pattern Recognition