Score: 1

MMED: A Multimodal Micro-Expression Dataset based on Audio-Visual Fusion

Published: September 18, 2025 | arXiv ID: 2509.14592v1

By: Junbo Wang, Yan Zhao, Shuo Li, and more

Potential Business Impact:

Lets computers "hear" hidden feelings in voices.

Business Areas:
Motion Capture, Media and Entertainment, Video

Micro-expressions (MEs) are crucial leakages of concealed emotion, yet their study has been constrained by a reliance on silent, visual-only data. To address this limitation, we make two principal contributions. First, we present MMED, to our knowledge the first dataset capturing the spontaneous vocal cues that co-occur with MEs in ecologically valid, high-stakes interactions. Second, we propose the Asymmetric Multimodal Fusion Network (AMF-Net), a novel method that fuses a global visual summary with a dynamic audio sequence through an asymmetric cross-attention framework. Rigorous Leave-One-Subject-Out Cross-Validation (LOSO-CV) experiments validate our approach and provide conclusive evidence that audio offers critical, disambiguating information for ME analysis. Together, the MMED dataset and the AMF-Net method provide valuable resources and a validated analytical approach for micro-expression recognition.
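
The abstract describes an asymmetric fusion in which a global visual summary attends over a dynamic audio sequence. The sketch below illustrates that general idea in PyTorch: a pooled visual embedding serves as the query and per-step audio features serve as keys and values in a cross-attention layer. All module names, feature dimensions, and the classification head are illustrative assumptions, not the authors' published AMF-Net specification.

```python
import torch
import torch.nn as nn

class AsymmetricCrossAttentionFusion(nn.Module):
    """Minimal sketch: a global visual embedding queries a frame-level
    audio sequence via cross-attention; the attended audio context is
    concatenated with the visual summary for emotion classification.
    Dimensions and layer choices are assumptions for illustration."""

    def __init__(self, vis_dim=512, aud_dim=128, fuse_dim=256,
                 num_heads=4, num_classes=3):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, fuse_dim)   # project global visual summary
        self.aud_proj = nn.Linear(aud_dim, fuse_dim)   # project per-step audio features
        # Asymmetry: the single visual summary is the query; audio steps are keys/values.
        self.cross_attn = nn.MultiheadAttention(fuse_dim, num_heads, batch_first=True)
        self.classifier = nn.Sequential(
            nn.LayerNorm(2 * fuse_dim),
            nn.Linear(2 * fuse_dim, num_classes),
        )

    def forward(self, vis_global, aud_seq):
        # vis_global: (B, vis_dim) global visual summary of an ME clip
        # aud_seq:    (B, T, aud_dim) dynamic audio feature sequence
        q = self.vis_proj(vis_global).unsqueeze(1)          # (B, 1, fuse_dim)
        kv = self.aud_proj(aud_seq)                         # (B, T, fuse_dim)
        aud_context, _ = self.cross_attn(q, kv, kv)         # (B, 1, fuse_dim)
        fused = torch.cat([q, aud_context], dim=-1).squeeze(1)  # (B, 2*fuse_dim)
        return self.classifier(fused)                       # emotion logits

# Usage with random tensors standing in for extracted features.
model = AsymmetricCrossAttentionFusion()
logits = model(torch.randn(8, 512), torch.randn(8, 40, 128))
print(logits.shape)  # torch.Size([8, 3])
```

The design choice being illustrated is that the visual stream is compressed to a single summary vector while the audio stream keeps its temporal resolution, so attention weights indicate which audio moments disambiguate the visual cue.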

Page Count
5 pages

Category
Computer Science:
Multimedia