Score: 2

MDD-Net: Multimodal Depression Detection through Mutual Transformer

Published: August 11, 2025 | arXiv ID: 2508.08093v1

By: Md Rezwanul Haque, Md. Milon Islam, S M Taslim Uddin Raju, and more

Potential Business Impact:

Detects depression from voice and facial cues in online videos.

Depression is a major mental health condition that severely impacts the emotional and physical well-being of individuals. The ease of collecting data from social media platforms has attracted significant interest in properly utilizing this information for mental health research. This work proposes a Multimodal Depression Detection Network (MDD-Net), which uses acoustic and visual data obtained from social media networks and exploits mutual transformers to extract and fuse multimodal features for efficient depression detection. MDD-Net consists of four core modules: an acoustic feature extraction module for retrieving relevant acoustic attributes, a visual feature extraction module for extracting significant high-level patterns, a mutual transformer for computing the correlations among the generated features and fusing the features from the two modalities, and a detection layer for detecting depression from the fused feature representations. Extensive experiments are performed on the multimodal D-Vlog dataset, and the findings show that the developed network surpasses the state-of-the-art by up to 17.37% in F1-Score, demonstrating the superior performance of the proposed system. The source code is available at https://github.com/rezwanh001/Multimodal-Depression-Detection.
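To make the four-module pipeline concrete, the sketch below shows one plausible reading of the architecture in PyTorch: per-modality encoders, a mutual (cross-attention) transformer block that lets each modality attend over the other before fusion, and a small detection head. This is not the authors' implementation; the class names, hidden sizes, and feature dimensions are illustrative assumptions, and the actual code is in the linked repository.

```python
import torch
import torch.nn as nn

class MutualTransformerBlock(nn.Module):
    """Cross-attends each modality over the other, then fuses the two streams."""
    def __init__(self, dim=256, num_heads=4):
        super().__init__()
        # Acoustic queries attend over visual keys/values, and vice versa.
        self.a2v_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.v2a_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_a = nn.LayerNorm(dim)
        self.norm_v = nn.LayerNorm(dim)

    def forward(self, acoustic, visual):
        # acoustic: (batch, T_a, dim), visual: (batch, T_v, dim)
        a_fused, _ = self.a2v_attn(acoustic, visual, visual)    # acoustic enriched by visual
        v_fused, _ = self.v2a_attn(visual, acoustic, acoustic)  # visual enriched by acoustic
        a_fused = self.norm_a(acoustic + a_fused)
        v_fused = self.norm_v(visual + v_fused)
        # Pool over time and concatenate the two fused streams.
        return torch.cat([a_fused.mean(dim=1), v_fused.mean(dim=1)], dim=-1)

class MDDNetSketch(nn.Module):
    """Illustrative pipeline: modality encoders -> mutual transformer -> detection head."""
    def __init__(self, acoustic_dim=25, visual_dim=136, dim=256):
        super().__init__()
        # Simple linear encoders stand in for the paper's feature extraction modules.
        self.acoustic_encoder = nn.Linear(acoustic_dim, dim)
        self.visual_encoder = nn.Linear(visual_dim, dim)
        self.mutual = MutualTransformerBlock(dim)
        self.detector = nn.Sequential(nn.Linear(2 * dim, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, acoustic, visual):
        a = self.acoustic_encoder(acoustic)  # (batch, T_a, dim)
        v = self.visual_encoder(visual)      # (batch, T_v, dim)
        fused = self.mutual(a, v)            # (batch, 2*dim)
        return self.detector(fused)          # depressed / non-depressed logits

# Usage with dummy inputs shaped like per-frame acoustic and facial-landmark features
# (the 25- and 136-dimensional inputs are assumptions for illustration only).
model = MDDNetSketch()
logits = model(torch.randn(4, 100, 25), torch.randn(4, 100, 136))
print(logits.shape)  # torch.Size([4, 2])
```

The key design point the sketch tries to capture is that fusion happens through mutual attention rather than plain concatenation: each modality's representation is conditioned on the other before the detection layer sees it.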

Country of Origin
🇦🇪 🇨🇦 United Arab Emirates, Canada

Page Count
8 pages

Category
Computer Science: Computer Vision and Pattern Recognition