Interactive Multimodal Fusion with Temporal Modeling

Published: March 13, 2025 | arXiv ID: 2503.10523v1

By: Jun Yu, Yongqi Wang, Lei Wang, and more

Potential Business Impact:

Lets computers estimate a person's emotions from their facial expressions and voice.

Business Areas:
Augmented Reality Hardware, Software

This paper presents our method for valence-arousal (VA) estimation in the 8th Affective Behavior Analysis in-the-Wild (ABAW) competition. Our approach integrates visual and audio information through a multimodal framework. The visual branch uses a pre-trained ResNet model to extract spatial features from facial images. The audio branches employ pre-trained VGG models to extract VGGish and LogMel features from speech signals. These features undergo temporal modeling using Temporal Convolutional Networks (TCNs). We then apply cross-modal attention mechanisms, in which visual features interact with audio features through query-key-value attention structures. Finally, the features are concatenated and passed through a regression layer to predict valence and arousal. Our method achieves competitive performance on the Aff-Wild2 dataset, demonstrating effective multimodal fusion for in-the-wild VA estimation.
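
The abstract describes the fusion pipeline at a high level. Below is a minimal PyTorch sketch of that pipeline, not the authors' implementation: the feature dimensions (512-d ResNet, 128-d audio), the single-block TCNs, and the use of nn.MultiheadAttention for the query-key-value step are illustrative assumptions.

```python
# Minimal sketch of the described fusion pipeline (assumed dimensions/layers),
# not the authors' code.
import torch
import torch.nn as nn


class TemporalConvBlock(nn.Module):
    """A simple TCN block: a dilated 1D convolution applied over the time axis."""

    def __init__(self, dim, kernel_size=3, dilation=1):
        super().__init__()
        pad = (kernel_size - 1) * dilation // 2
        self.conv = nn.Conv1d(dim, dim, kernel_size, padding=pad, dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x):                     # x: (batch, time, dim)
        y = self.conv(x.transpose(1, 2))      # convolve over time
        return self.relu(y).transpose(1, 2)   # back to (batch, time, dim)


class CrossModalVAEstimator(nn.Module):
    """Visual features attend to audio features; a regressor predicts valence/arousal."""

    def __init__(self, vis_dim=512, aud_dim=128, hidden=256, heads=4):
        super().__init__()
        # Project pre-extracted ResNet / VGGish-LogMel features to a shared size.
        self.vis_proj = nn.Linear(vis_dim, hidden)
        self.aud_proj = nn.Linear(aud_dim, hidden)
        # Temporal modeling of each stream with a TCN block.
        self.vis_tcn = TemporalConvBlock(hidden)
        self.aud_tcn = TemporalConvBlock(hidden)
        # Cross-modal attention: visual queries, audio keys/values.
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        # Concatenate visual and attended audio features, regress valence/arousal.
        self.regressor = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 2)
        )

    def forward(self, vis_feats, aud_feats):
        # vis_feats: (batch, T, vis_dim) per-frame ResNet features
        # aud_feats: (batch, T, aud_dim) per-frame audio features
        v = self.vis_tcn(self.vis_proj(vis_feats))
        a = self.aud_tcn(self.aud_proj(aud_feats))
        attended, _ = self.attn(query=v, key=a, value=a)
        fused = torch.cat([v, attended], dim=-1)
        return self.regressor(fused)          # (batch, T, 2): valence, arousal


if __name__ == "__main__":
    model = CrossModalVAEstimator()
    va = model(torch.randn(2, 16, 512), torch.randn(2, 16, 128))
    print(va.shape)  # torch.Size([2, 16, 2])
```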

Country of Origin
🇨🇳 China

Page Count
7 pages

Category
Computer Science:
Computer Vision and Pattern Recognition