Supervised Contrastive Frame Aggregation for Video Representation Learning
By: Shaif Chowdhury, Mushfika Rahman, Greg Hamerly
Potential Business Impact:
Helps computers recognize actions in videos more accurately while using less computing power.
We propose a supervised contrastive learning framework for video representation learning that leverages temporally global context. We introduce a video-to-image aggregation strategy that spatially arranges multiple frames from each video into a single input image. This design enables the use of pre-trained convolutional neural network backbones such as ResNet-50 and avoids the computational overhead of complex video transformer models. We then design a contrastive learning objective that directly compares pairwise projections generated by the model: projections from videos sharing the same label form positive pairs, while all other projections are treated as negatives. Multiple natural views of each video are created by sampling different sets of frames in time; rather than relying on data augmentation, these frame-level variations produce diverse positive samples with global context and reduce overfitting. Experiments on the Penn Action and HMDB51 datasets demonstrate that the proposed method outperforms existing approaches in classification accuracy while requiring fewer computational resources. The proposed Supervised Contrastive Frame Aggregation method learns effective video representations in both supervised and self-supervised settings and supports video-based tasks such as classification and captioning. It achieves 76% classification accuracy on Penn Action compared to 43% for ViViT, and 48% on HMDB51 compared to 37% for ViViT.
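The sketch below illustrates, under assumptions, the two core ideas from the abstract in PyTorch: tiling sampled frames into a single grid image so a standard 2D backbone such as ResNet-50 can process a clip, and a supervised contrastive (SupCon-style) loss in which projections sharing a label are positives and all others are negatives. The helper names `frames_to_grid` and `sup_con_loss`, the grid size, and the temperature value are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F


def frames_to_grid(frames: torch.Tensor, rows: int = 2, cols: int = 2) -> torch.Tensor:
    """Tile T = rows * cols sampled frames (T, C, H, W) into one (C, rows*H, cols*W) image."""
    t, c, h, w = frames.shape
    assert t == rows * cols, "number of sampled frames must fill the grid"
    grid = frames.view(rows, cols, c, h, w)       # (rows, cols, C, H, W)
    grid = grid.permute(2, 0, 3, 1, 4)            # (C, rows, H, cols, W)
    return grid.reshape(c, rows * h, cols * w)    # single image for a 2D CNN backbone


def sup_con_loss(proj: torch.Tensor, labels: torch.Tensor,
                 temperature: float = 0.07) -> torch.Tensor:
    """Supervised contrastive loss: same-label projections are positives, all others negatives.

    proj:   (N, D) projection vectors; different temporal samplings of a video
            appear as separate rows that share that video's label.
    labels: (N,) integer class labels.
    """
    proj = F.normalize(proj, dim=1)
    sim = proj @ proj.t() / temperature           # pairwise cosine similarities
    n = proj.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=proj.device)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask

    # softmax denominator over all samples except the anchor itself
    denom = torch.logsumexp(sim.masked_fill(self_mask, float("-inf")), dim=1, keepdim=True)
    log_prob = sim - denom

    # average log-probability of positives, for anchors that have at least one positive
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0
    sum_log_prob_pos = (log_prob * pos_mask.float()).sum(dim=1)
    return -(sum_log_prob_pos[valid] / pos_counts[valid]).mean()


# Example usage with random data standing in for CNN projections of frame grids.
if __name__ == "__main__":
    frames = torch.rand(4, 3, 112, 112)                   # 4 sampled frames of one video
    grid_image = frames_to_grid(frames, rows=2, cols=2)   # (3, 224, 224), ResNet-50 sized
    projections = torch.randn(8, 128)                     # batch of projected grid images
    labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])       # two temporal samplings per video
    print(grid_image.shape, sup_con_loss(projections, labels).item())
```

In this reading of the method, each training batch would contain several grid images built from different temporal samplings of the same videos, so every anchor naturally has multiple positives without relying on data augmentation.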
Similar Papers
Unified Video Editing with Temporal Reasoner
CV and Pattern Recognition
Edits videos precisely without needing masks.
VideoCompressa: Data-Efficient Video Understanding via Joint Temporal Compression and Spatial Reconstruction
CV and Pattern Recognition
Makes AI learn from videos using way less data.
HFS: Holistic Query-Aware Frame Selection for Efficient Video Reasoning
CV and Pattern Recognition
Finds the most important moments in videos.