An Evaluation of a Visual Question Answering Strategy for Zero-shot Facial Expression Recognition in Still Images
By: Modesto Castrillón-Santana, Oliverio J. Santana, David Freire-Obregón, and more
Potential Business Impact:
Lets computers understand faces without prior training.
Facial expression recognition (FER) is a key research area in computer vision and human-computer interaction. Despite recent advances in deep learning, challenges persist, especially in generalizing to new scenarios. In particular, state-of-the-art FER models suffer significant performance drops in zero-shot settings. To address this problem, the community has recently started to explore the integration of knowledge from Large Language Models into visual tasks. In this work, we evaluate a broad collection of locally executed Visual Language Models (VLMs), compensating for their lack of task-specific knowledge by adopting a Visual Question Answering strategy. We compare the proposed pipeline with state-of-the-art FER models, both with and without VLM integration, on well-known FER benchmarks: AffectNet, FERPlus, and RAF-DB. The results show excellent performance for some VLMs in zero-shot FER scenarios, indicating the need for further exploration to improve FER generalization.
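A core practical step in a VQA-based zero-shot pipeline like the one described is mapping a VLM's free-text answer onto a benchmark's fixed label set. The sketch below illustrates that step for the eight AffectNet classes; the synonym table, function name, and keyword-matching approach are illustrative assumptions, not the paper's exact setup.

```python
from typing import Optional

# The eight AffectNet expression classes.
AFFECTNET_CLASSES = [
    "neutral", "happy", "sad", "surprise",
    "fear", "disgust", "anger", "contempt",
]

# Common wordings a VLM might use for each class (assumed synonym table).
SYNONYMS = {
    "happiness": "happy", "joy": "happy", "smiling": "happy",
    "sadness": "sad", "surprised": "surprise", "astonished": "surprise",
    "afraid": "fear", "scared": "fear",
    "disgusted": "disgust", "angry": "anger", "mad": "anger",
    "contemptuous": "contempt",
}

def map_answer_to_label(answer: str) -> Optional[str]:
    """Map a VLM's free-text answer to a benchmark class, or None."""
    text = answer.lower()
    # Normalize synonyms to canonical class names.
    for word, label in SYNONYMS.items():
        text = text.replace(word, label)
    # Return the first benchmark class mentioned in the answer, if any.
    for label in AFFECTNET_CLASSES:
        if label in text:
            return label
    return None
```

Naive substring matching like this can misfire (e.g. "anger" inside "dangerous"); real pipelines often constrain the model instead, for example by listing the allowed classes in the question or restricting decoding to them.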
Similar Papers
Self-Supervised Multi-View Representation Learning using Vision-Language Model for 3D/4D Facial Expression Recognition
CV and Pattern Recognition
Computer understands your face's feelings better.
Evaluating Open-Source Vision Language Models for Facial Emotion Recognition against Traditional Deep Learning Models
CV and Pattern Recognition
Makes computers understand emotions from blurry pictures.
Compound Expression Recognition via Large Vision-Language Models
CV and Pattern Recognition
Helps computers understand emotions from faces.