A new training approach for text classification in Mental Health: LatentGLoss
By: Korhan Sevinç
Potential Business Impact:
Helps computers better detect mental health conditions from text.
This study presents a multi-stage approach to mental health classification by leveraging traditional machine learning algorithms, deep learning architectures, and transformer-based models. A novel data set was curated and utilized to evaluate the performance of various methods, starting with conventional classifiers and advancing through neural networks. To broaden the architectural scope, recurrent neural networks (RNNs) such as LSTM and GRU were also evaluated to explore their effectiveness in modeling sequential patterns in the data. Subsequently, transformer models such as BERT were fine-tuned to assess the impact of contextual embeddings in this domain. Beyond these baseline evaluations, the core contribution of this study lies in a novel training strategy involving a dual-model architecture composed of a teacher and a student network. Unlike standard distillation techniques, this method does not rely on soft label transfer; instead, it facilitates information flow through both the teacher model's output and its latent representations by modifying the loss function. The experimental results highlight the effectiveness of each modeling stage and demonstrate that the proposed loss function and teacher-student interaction significantly enhance the model's learning capacity in mental health prediction tasks.
Similar Papers
Mental Multi-class Classification on Social Media: Benchmarking Transformer Architectures against LSTM Models
Computation and Language
Helps computers spot different mental health issues.
Advancing Mental Disorder Detection: A Comparative Evaluation of Transformer and LSTM Architectures on Social Media
Computation and Language
Finds mental health problems in social media posts.
Leveraging Embedding Techniques in Multimodal Machine Learning for Mental Illness Assessment
Audio and Speech Processing
Helps computers find signs of sadness and fear in voices.