Multi-modal Transfer Learning for Dynamic Facial Emotion Recognition in the Wild

Published: April 30, 2025 | arXiv ID: 2504.21248v1

By: Ezra Engel, Lishan Li, Chris Hudy, and more

Potential Business Impact:

Helps computers recognize emotions from facial expressions more accurately.

Business Areas:
Facial Recognition Data and Analytics, Software

Facial expression recognition (FER) is a subset of computer vision with important applications in human-computer interaction, healthcare, and customer service. FER represents a challenging problem space because accurate classification requires a model to differentiate between subtle changes in facial features. In this paper, we examine the use of multi-modal transfer learning to improve performance on a challenging video-based FER dataset, Dynamic Facial Expression in the Wild (DFEW). Using a combination of pretrained ResNets, OpenPose, and OmniVec networks, we explore the impact of cross-temporal, multi-modal features on classification accuracy. Ultimately, we find that these fine-tuned multi-modal feature generators modestly improve the accuracy of our transformer-based classification model.
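The pipeline the abstract describes, extracting per-frame features with several pretrained backbones and classifying the fused sequence with a transformer, can be sketched as below. This is a minimal illustration, not the paper's implementation: the feature dimensions, layer counts, and pooling strategy are assumptions, and the three input streams stand in for ResNet, OpenPose, and OmniVec features.

```python
import torch
import torch.nn as nn

class MultiModalFusionClassifier(nn.Module):
    """Hypothetical sketch: fuse per-frame features from multiple
    pretrained backbones and classify with a transformer encoder.
    All dimensions here are illustrative, not taken from the paper."""

    def __init__(self, modality_dims=(512, 128, 768), d_model=256, num_classes=7):
        super().__init__()
        # Project the concatenated multi-modal features to a common width.
        self.proj = nn.Linear(sum(modality_dims), d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, feats):
        # feats: list of (batch, frames, dim) tensors, one per modality,
        # e.g. outputs of frozen ResNet / OpenPose / OmniVec extractors.
        x = self.proj(torch.cat(feats, dim=-1))
        x = self.encoder(x)           # model cross-temporal interactions
        return self.head(x.mean(dim=1))  # average-pool over frames

# Toy usage: 2 clips of 16 frames, three feature streams.
model = MultiModalFusionClassifier()
batch = [torch.randn(2, 16, d) for d in (512, 128, 768)]
logits = model(batch)
print(logits.shape)
```

The `num_classes=7` default reflects the seven basic emotion categories commonly used in DFEW-style benchmarks; the mean-pooling over frames is one simple choice for aggregating the encoder's temporal output.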

Country of Origin
🇺🇸 United States

Page Count
8 pages

Category
Computer Science:
CV and Pattern Recognition