Emotion Recognition with CLIP and Sequential Learning
By: Weiwei Zhou, Chenkun Ling, Zefeng Cai
Potential Business Impact:
Enables software to read human emotional states from video, supporting more natural human-computer interaction.
Human emotion recognition plays a crucial role in enabling seamless interaction between humans and computers. In this paper, we present our methodology for the Valence-Arousal (VA) Estimation, Expression Recognition, and Action Unit (AU) Detection Challenges of the 8th Workshop and Competition on Affective Behavior Analysis in-the-wild (ABAW). Our approach introduces a framework for continuous emotion recognition: we fine-tune the CLIP model on the expression-annotated Aff-Wild2 dataset, producing a robust and efficient visual feature extractor. To further boost performance, we feed these features through Temporal Convolutional Network (TCN) modules and Transformer Encoder modules to capture temporal dynamics. With these components, our model outperforms the challenge baselines, recognizing human emotions with greater accuracy and efficiency.
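As a rough illustration of the pipeline the abstract describes, the sketch below wires per-frame CLIP visual features into a TCN followed by a Transformer Encoder, with one head per ABAW task. This is a minimal sketch, not the authors' implementation: the feature dimension, kernel sizes, layer counts, and head sizes are assumptions, and the CLIP features are taken as precomputed per-frame vectors.

```python
# Minimal sketch of the described pipeline: per-frame CLIP visual features
# -> Temporal Convolutional Network (TCN) -> Transformer Encoder -> task heads.
# All hyperparameters (dims, kernel size, depth) are illustrative assumptions.
import torch
import torch.nn as nn

class TemporalBlock(nn.Module):
    """One dilated causal 1-D conv block, the basic TCN unit."""
    def __init__(self, dim, kernel_size=3, dilation=1):
        super().__init__()
        pad = (kernel_size - 1) * dilation       # left-pad so the conv stays causal
        self.pad = nn.ConstantPad1d((pad, 0), 0.0)
        self.conv = nn.Conv1d(dim, dim, kernel_size, dilation=dilation)
        self.act = nn.ReLU()

    def forward(self, x):                        # x: (batch, dim, time)
        return self.act(self.conv(self.pad(x))) + x   # residual connection

class EmotionModel(nn.Module):
    def __init__(self, feat_dim=512, num_expr=8, num_aus=12):
        super().__init__()
        # TCN over the frame sequence; growing dilations widen the receptive field
        self.tcn = nn.Sequential(*[TemporalBlock(feat_dim, dilation=d) for d in (1, 2, 4)])
        # Transformer Encoder for longer-range temporal context
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # One head per ABAW task (class counts here are assumptions)
        self.va_head = nn.Linear(feat_dim, 2)           # valence, arousal
        self.expr_head = nn.Linear(feat_dim, num_expr)  # expression logits
        self.au_head = nn.Linear(feat_dim, num_aus)     # AU presence logits

    def forward(self, clip_feats):               # clip_feats: (batch, time, feat_dim)
        x = self.tcn(clip_feats.transpose(1, 2)).transpose(1, 2)
        x = self.encoder(x)
        # tanh bounds the VA outputs to [-1, 1], the usual annotation range
        return torch.tanh(self.va_head(x)), self.expr_head(x), self.au_head(x)

feats = torch.randn(4, 64, 512)        # 4 clips, 64 frames, 512-d CLIP features
va, expr, au = EmotionModel()(feats)
print(va.shape, expr.shape, au.shape)  # (4, 64, 2) (4, 64, 8) (4, 64, 12)
```

In this sketch the causal padding keeps the TCN strictly left-to-right over the frame sequence, while the Transformer Encoder adds bidirectional long-range context on top; the actual paper's layer ordering and sizes may differ.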
Similar Papers
Enhancing Facial Expression Recognition through Dual-Direction Attention Mixed Feature Networks and CLIP: Application to 8th ABAW Challenge
CV and Pattern Recognition
Helps computers understand emotions and faces better.
Technical Approach for the EMI Challenge in the 8th Affective Behavior Analysis in-the-Wild Competition
CV and Pattern Recognition
Helps computers understand emotions from faces and voices.
Interactive Multimodal Fusion with Temporal Modeling
CV and Pattern Recognition
Lets computers guess your feelings from faces and voices.