Emotion Recognition with CLIP and Sequential Learning

Published: March 13, 2025 | arXiv ID: 2503.09929v1

By: Weiwei Zhou, Chenkun Ling, Zefeng Cai

Potential Business Impact:

Helps computers understand your feelings better.

Business Areas:
Image Recognition Data and Analytics, Software

Human emotion recognition plays a crucial role in facilitating seamless human-computer interaction. In this paper, we present our methodology for tackling the Valence-Arousal (VA) Estimation Challenge, the Expression Recognition Challenge, and the Action Unit (AU) Detection Challenge, all within the framework of the 8th Workshop and Competition on Affective Behavior Analysis in-the-wild (ABAW). Our approach introduces a novel framework for enhancing continuous emotion recognition: we fine-tune the CLIP model on the Aff-Wild2 dataset, which provides annotated expression labels, yielding a robust and efficient visual feature extractor. To further boost the performance of continuous emotion recognition, we incorporate Temporal Convolutional Network (TCN) modules alongside Transformer Encoder modules in our system architecture. The integration of these components allows our model to outperform the baseline, demonstrating its ability to recognize human emotions with greater accuracy and efficiency.
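The pipeline described above could be sketched as follows. This is a minimal illustration, not the authors' implementation: the layer widths, dilation schedule, number of heads/layers, and the two-dimensional valence-arousal output head are all assumptions, and the frame-level CLIP embeddings are stood in for by random tensors.

```python
import torch
import torch.nn as nn

class TemporalEmotionHead(nn.Module):
    """Hypothetical TCN + Transformer Encoder head over per-frame CLIP features."""

    def __init__(self, feat_dim=512, hidden_dim=256, num_outputs=2):
        super().__init__()
        # TCN stage: dilated 1-D convolutions over the time axis
        # (padding chosen so sequence length is preserved).
        self.tcn = nn.Sequential(
            nn.Conv1d(feat_dim, hidden_dim, kernel_size=3, padding=1, dilation=1),
            nn.ReLU(),
            nn.Conv1d(hidden_dim, hidden_dim, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(),
        )
        # Transformer Encoder stage for longer-range temporal context.
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=hidden_dim, nhead=4, dim_feedforward=512, batch_first=True
        )
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=2)
        # Assumed head: continuous valence/arousal, squashed to [-1, 1].
        self.head = nn.Linear(hidden_dim, num_outputs)

    def forward(self, clip_feats):
        # clip_feats: (batch, time, feat_dim) frame-level CLIP embeddings
        x = self.tcn(clip_feats.transpose(1, 2)).transpose(1, 2)
        x = self.transformer(x)
        return torch.tanh(self.head(x))

# Dummy batch: 2 clips of 16 frames with 512-dim CLIP features.
feats = torch.randn(2, 16, 512)
out = TemporalEmotionHead()(feats)
print(out.shape)  # torch.Size([2, 16, 2])
```

The frozen (or fine-tuned) CLIP image encoder would supply `clip_feats`; the TCN captures short-range motion cues while the Transformer Encoder models longer dependencies across the clip.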

Page Count
6 pages

Category
Computer Science:
CV and Pattern Recognition