Point-Supervised Facial Expression Spotting with Gaussian-Based Instance-Adaptive Intensity Modeling
By: Yicheng Deng, Hideaki Hayashi, Hajime Nagahara
Potential Business Impact:
Spots and classifies facial expressions in video while needing far less labeled data.
Automatic facial expression spotting, which aims to identify facial expression instances in untrimmed videos, is crucial for facial expression analysis. Existing methods primarily focus on fully supervised learning and rely on costly, time-consuming temporal boundary annotations. In this paper, we investigate point-supervised facial expression spotting (P-FES), where only a single timestamp annotation per instance is required for training. We propose a two-branch framework for P-FES. First, to mitigate the limitation of hard pseudo-labeling, which often confuses neutral frames with expression frames of varying intensity, we propose a Gaussian-based instance-adaptive intensity modeling (GIM) module that models the instance-level expression intensity distribution for soft pseudo-labeling. By detecting the pseudo-apex frame around each point label, estimating the duration, and constructing an instance-level Gaussian distribution, GIM assigns soft pseudo-labels to expression frames for more reliable intensity supervision. The GIM module is incorporated into our framework to optimize the class-agnostic expression intensity branch. Second, we design a class-aware apex classification branch that distinguishes macro- and micro-expressions solely based on their pseudo-apex frames. During inference, the two branches work independently: the class-agnostic expression intensity branch generates expression proposals, while the class-aware apex classification branch classifies them as macro- or micro-expressions. Furthermore, we introduce an intensity-aware contrastive loss to enhance discriminative feature learning and suppress neutral noise by contrasting neutral frames with expression frames of varying intensity. Extensive experiments on the SAMM-LV, CAS(ME)$^2$, and CAS(ME)$^3$ datasets demonstrate the effectiveness of the proposed framework.
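The abstract describes GIM as three steps: find a pseudo-apex near the point label, estimate the instance duration, and place an instance-level Gaussian over the frames to produce soft intensity labels. The sketch below illustrates that pipeline under stated assumptions; the window size, the duration rule (expand while intensity stays above half the apex value), and the sigma-from-duration mapping are illustrative choices, not the authors' exact settings.

```python
import numpy as np

def gim_soft_pseudo_labels(intensity, point_idx, search_radius=15, min_duration=5):
    """Minimal sketch of Gaussian-based instance-adaptive intensity modeling (GIM).

    intensity: per-frame class-agnostic expression-intensity scores (1-D array).
    point_idx: the single annotated timestamp for one instance.
    All parameter names and defaults are assumptions for illustration.
    """
    T = len(intensity)

    # 1. Pseudo-apex detection: highest-intensity frame in a window
    #    around the point label.
    lo = max(0, point_idx - search_radius)
    hi = min(T, point_idx + search_radius + 1)
    apex = lo + int(np.argmax(intensity[lo:hi]))

    # 2. Duration estimation (assumed rule): expand from the apex while the
    #    intensity stays above a fraction of the apex intensity.
    thresh = 0.5 * intensity[apex]
    onset, offset = apex, apex
    while onset > 0 and intensity[onset - 1] >= thresh:
        onset -= 1
    while offset < T - 1 and intensity[offset + 1] >= thresh:
        offset += 1
    duration = max(offset - onset + 1, min_duration)

    # 3. Instance-level Gaussian centred at the pseudo-apex, with a spread
    #    tied to the estimated duration, used as soft pseudo-labels.
    sigma = duration / 6.0  # assumed: +/- 3 sigma spans the instance
    frames = np.arange(T)
    soft_labels = np.exp(-((frames - apex) ** 2) / (2.0 * sigma ** 2))

    # Frames far outside the instance are treated as neutral (label 0).
    soft_labels[np.abs(frames - apex) > 3 * sigma] = 0.0
    return apex, soft_labels
```

The returned soft labels would supervise the class-agnostic intensity branch, while the detected pseudo-apex frame is the only frame the class-aware branch needs for macro- versus micro-expression classification.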
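The intensity-aware contrastive loss is described only at a high level (contrast neutral frames against expression frames of varying intensity). Below is one plausible InfoNCE-style reading of that idea, not the paper's formulation: neutral frames serve as anchors, another neutral frame is the positive, and expression frames are negatives weighted by their GIM soft intensity, so stronger expressions are pushed away harder. The threshold, temperature, and weighting are assumptions.

```python
import torch
import torch.nn.functional as F

def intensity_aware_contrastive_loss(feats, soft_labels, neutral_thresh=0.1, tau=0.1):
    """Illustrative InfoNCE-style sketch of an intensity-aware contrastive loss.

    feats: (T, D) frame features; soft_labels: (T,) GIM soft intensities in [0, 1].
    This weighting scheme is an assumed interpretation for illustration.
    """
    feats = F.normalize(feats, dim=1)
    neutral_idx = torch.where(soft_labels < neutral_thresh)[0]
    expr_idx = torch.where(soft_labels >= neutral_thresh)[0]
    if len(neutral_idx) < 2 or len(expr_idx) == 0:
        return feats.new_tensor(0.0)

    losses = []
    for i, a in enumerate(neutral_idx):
        # Positive: another neutral frame (here simply the next one in the list).
        p = neutral_idx[(i + 1) % len(neutral_idx)]
        pos = torch.exp(feats[a] @ feats[p] / tau)

        # Negatives: expression frames, weighted by soft intensity so that
        # higher-intensity frames contribute more to the repulsion term.
        neg_sim = torch.exp(feats[a] @ feats[expr_idx].t() / tau)
        neg = (soft_labels[expr_idx] * neg_sim).sum()

        losses.append(-torch.log(pos / (pos + neg)))
    return torch.stack(losses).mean()
```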
Similar Papers
Weakly Supervised Continuous Micro-Expression Intensity Estimation Using Temporal Deep Neural Network
CV and Pattern Recognition
Reads emotions from tiny face twitches.
Feature Aggregation for Efficient Continual Learning of Complex Facial Expressions
CV and Pattern Recognition
AI learns to read emotions without forgetting.
Deep Learning-Based Real-Time Sequential Facial Expression Analysis Using Geometric Features
CV and Pattern Recognition
Lets computers understand your feelings from your face.