Advancing Mental Disorder Detection: A Comparative Evaluation of Transformer and LSTM Architectures on Social Media

Published: July 17, 2025 | arXiv ID: 2507.19511v1

By: Khalid Hasan, Jamil Saquer, Mukulika Ghosh

Potential Business Impact:

Detects signs of mental health disorders in social media text.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The rising prevalence of mental health disorders necessitates the development of robust, automated tools for early detection and monitoring. Recent advances in Natural Language Processing (NLP), particularly transformer-based architectures, have demonstrated significant potential in text analysis. This study provides a comprehensive evaluation of state-of-the-art transformer models (BERT, RoBERTa, DistilBERT, ALBERT, and ELECTRA) against Long Short-Term Memory (LSTM)-based approaches using different text embedding techniques for mental health disorder classification on Reddit. We construct a large annotated dataset, validating its reliability through statistical judgmental analysis and topic modeling. Experimental results demonstrate the superior performance of transformer models over traditional deep-learning approaches. RoBERTa achieved the highest classification performance, with a 99.54% F1 score on the hold-out test set and a 96.05% F1 score on the external test set. Notably, LSTM models augmented with BERT embeddings proved highly competitive, achieving F1 scores exceeding 94% on the external dataset while requiring significantly fewer computational resources. These findings highlight the effectiveness of transformer-based models for real-time, scalable mental health monitoring. We discuss the implications for clinical applications and digital mental health interventions, offering insights into the capabilities and limitations of state-of-the-art NLP methodologies in mental disorder detection.
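
To make the first approach concrete, below is a minimal sketch of fine-tuning a transformer for post classification, assuming the Hugging Face transformers library. The toy texts, label set, and hyperparameters are illustrative placeholders, not the authors' dataset or exact training setup.

```python
# Sketch: fine-tuning RoBERTa for multi-class mental health post classification.
# Labels, example texts, and hyperparameters are hypothetical stand-ins.
import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

LABELS = ["depression", "anxiety", "control"]  # hypothetical label set

class RedditPosts(Dataset):
    """Wraps (text, label) pairs as tokenized tensors for the Trainer."""
    def __init__(self, texts, labels, tokenizer, max_len=256):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=max_len, return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = self.labels[i]
        return item

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=len(LABELS))

# Toy examples stand in for the annotated Reddit corpus.
train_ds = RedditPosts(["I can't sleep and everything feels hopeless",
                        "Great hike in the mountains today"],
                       [0, 2], tokenizer)

args = TrainingArguments(output_dir="out", num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=train_ds).train()
```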
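The competitive lightweight alternative the abstract reports, an LSTM over BERT embeddings, could look like the following sketch: the pretrained encoder is frozen and used only to produce contextual token embeddings, which keeps training cost low. Layer sizes and pooling choices here are assumptions for illustration.

```python
# Sketch: a bidirectional LSTM classifier over frozen BERT token embeddings.
# Hidden size, pooling, and class count are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertLSTMClassifier(nn.Module):
    def __init__(self, n_classes, hidden=128):
        super().__init__()
        self.bert = AutoModel.from_pretrained("bert-base-uncased")
        for p in self.bert.parameters():  # freeze BERT: used as an embedder only
            p.requires_grad = False
        self.lstm = nn.LSTM(self.bert.config.hidden_size, hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, input_ids, attention_mask):
        with torch.no_grad():  # no gradients through the frozen encoder
            emb = self.bert(input_ids,
                            attention_mask=attention_mask).last_hidden_state
        out, _ = self.lstm(emb)
        # Mean-pool LSTM outputs over non-padding tokens before the head.
        mask = attention_mask.unsqueeze(-1)
        pooled = (out * mask).sum(dim=1) / mask.sum(dim=1)
        return self.head(pooled)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tok(["I feel empty lately", "Loving this weather"],
            padding=True, truncation=True, return_tensors="pt")
model = BertLSTMClassifier(n_classes=3)
logits = model(batch["input_ids"], batch["attention_mask"])  # shape: (2, 3)
```

Freezing the encoder is one plausible reading of "LSTM models augmented with BERT embeddings" and explains the reported resource savings, since only the LSTM and the linear head are trained.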

Country of Origin
🇺🇸 United States

Page Count
10 pages

Category
Computer Science:
Computation and Language