Dynamic Fusion Multimodal Network for SpeechWellness Detection
By: Wenqiang Sun, Han Yin, Jisheng Bai, and more
Potential Business Impact:
Helps find kids at risk of suicide.
Suicide is one of the leading causes of death among adolescents. Previous suicide risk prediction studies have primarily focused on either textual or acoustic information in isolation; however, the integration of multimodal signals, such as speech and text, offers a more comprehensive understanding of an individual's mental state. Motivated by this, and in the context of the 1st SpeechWellness detection challenge, we explore a lightweight multi-branch multimodal system based on a dynamic fusion mechanism for SpeechWellness detection. To address the limitation of prior approaches that rely on time-domain waveforms for acoustic analysis, our system incorporates both time-domain and time-frequency (TF) domain acoustic features, as well as semantic representations. In addition, we introduce a dynamic fusion block to adaptively integrate information from different modalities. Specifically, it applies learnable weights to each modality during the fusion process, enabling the model to adjust the contribution of each modality. To enhance computational efficiency, we design a lightweight structure by simplifying the original baseline model. Experimental results demonstrate that the proposed system outperforms the challenge baseline, achieving a 78% reduction in model parameters and a 5% improvement in accuracy.
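The abstract describes a dynamic fusion block that applies learnable weights to each modality branch. A minimal sketch of that idea in PyTorch is shown below; the projection dimensions, the softmax normalization of the weights, and all class and variable names are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn

class DynamicFusionBlock(nn.Module):
    """Hypothetical sketch: project each modality embedding to a shared
    dimension, then combine them with learnable, softmax-normalized
    per-modality weights (all dimensions here are assumed)."""

    def __init__(self, dims, fused_dim=128):
        super().__init__()
        # one linear projection per modality branch
        self.proj = nn.ModuleList([nn.Linear(d, fused_dim) for d in dims])
        # one learnable scalar weight per modality
        self.weights = nn.Parameter(torch.zeros(len(dims)))

    def forward(self, feats):
        # feats: list of (batch, dim_i) embeddings, one per modality
        w = torch.softmax(self.weights, dim=0)
        projected = [p(x) for p, x in zip(self.proj, feats)]
        return sum(w[i] * h for i, h in enumerate(projected))

# three assumed branches: time-domain, TF-domain, and semantic features
block = DynamicFusionBlock(dims=[64, 96, 768])
fused = block([torch.randn(4, 64), torch.randn(4, 96), torch.randn(4, 768)])
print(tuple(fused.shape))
```

Because the weights are trained jointly with the rest of the network, the model can learn to rely more on whichever modality is most informative for the task.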
Similar Papers
Suicide Risk Assessment Using Multimodal Speech Features: A Study on the SW1 Challenge Dataset
Computation and Language
Helps doctors find teens at risk of suicide.
In-context learning capabilities of Large Language Models to detect suicide risk among adolescents from speech transcripts
Audio and Speech Processing
Helps find teens at risk of suicide by listening.