Score: 1

Dynamic Fusion Multimodal Network for SpeechWellness Detection

Published: August 25, 2025 | arXiv ID: 2508.18057v1

By: Wenqiang Sun, Han Yin, Jisheng Bai, and more

Potential Business Impact:

Helps identify adolescents at risk of suicide from their speech and language.

Business Areas:
Speech Recognition Data and Analytics, Software

Suicide is one of the leading causes of death among adolescents. Previous suicide risk prediction studies have focused primarily on either textual or acoustic information in isolation, whereas integrating multimodal signals, such as speech and text, offers a more comprehensive understanding of an individual's mental state. Motivated by this, and in the context of the 1st SpeechWellness detection challenge, we explore a lightweight multi-branch multimodal system based on a dynamic fusion mechanism for SpeechWellness detection. To address the limitation of prior approaches that rely solely on time-domain waveforms for acoustic analysis, our system incorporates both time-domain and time-frequency (TF) domain acoustic features, as well as semantic representations. In addition, we introduce a dynamic fusion block that adaptively integrates information from the different modalities: it applies learnable weights to each modality during fusion, enabling the model to adjust each modality's contribution. To enhance computational efficiency, we design a lightweight structure by simplifying the original baseline model. Experimental results show that the proposed system outperforms the challenge baseline, achieving a 78% reduction in model parameters and a 5% improvement in accuracy.
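
To make the fusion idea concrete, below is a minimal sketch of a dynamic fusion block that assigns learnable, input-dependent weights to each modality before combining them. The module name, embedding dimensions, and softmax gating are illustrative assumptions drawn from the abstract, not the authors' exact implementation.

```python
import torch
import torch.nn as nn


class DynamicFusionBlock(nn.Module):
    """Hypothetical sketch: fuse per-modality embeddings with learnable weights.

    Each modality embedding is projected to a shared dimension, a scalar
    score is computed per modality, and a softmax over the scores yields
    fusion weights that scale each modality's contribution.
    """

    def __init__(self, input_dims, fused_dim=128):
        super().__init__()
        # One projection per modality (e.g., time-domain, TF-domain, text).
        self.projections = nn.ModuleList(
            [nn.Linear(d, fused_dim) for d in input_dims]
        )
        # One scoring head per modality produces a scalar "importance" score.
        self.scorers = nn.ModuleList(
            [nn.Linear(fused_dim, 1) for _ in input_dims]
        )

    def forward(self, features):
        # features: list of (batch, dim_i) embeddings, one per modality.
        projected = [proj(f) for proj, f in zip(self.projections, features)]
        scores = torch.cat(
            [scorer(p) for scorer, p in zip(self.scorers, projected)], dim=-1
        )  # (batch, num_modalities)
        weights = torch.softmax(scores, dim=-1)  # learnable, input-dependent
        # Weighted sum of the projected modality embeddings.
        fused = sum(
            w.unsqueeze(-1) * p
            for w, p in zip(weights.unbind(dim=-1), projected)
        )
        return fused, weights


# Example with assumed sizes: time-domain (64-d), TF-domain (128-d), text (256-d).
block = DynamicFusionBlock(input_dims=[64, 128, 256], fused_dim=128)
time_feat = torch.randn(8, 64)
tf_feat = torch.randn(8, 128)
text_feat = torch.randn(8, 256)
fused, weights = block([time_feat, tf_feat, text_feat])
print(fused.shape, weights.shape)  # torch.Size([8, 128]) torch.Size([8, 3])
```

Because the weights are produced from the inputs themselves, the block can down-weight a noisy or uninformative modality on a per-sample basis, which is the behavior the abstract attributes to the dynamic fusion mechanism.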

Country of Origin
🇨🇳 🇰🇷 China, Republic of Korea

Page Count
6 pages

Category
Computer Science:
Sound