Interpretable Depression Detection from Social Media Text Using LLM-Derived Embeddings
By: Samuel Kim, Oghenemaro Imieye, Yunting Yin
Potential Business Impact:
Detects depressive language in social media posts, enabling earlier mental health intervention.
Accurate and interpretable detection of depressive language in social media supports early intervention in mental health conditions and has important implications for both clinical practice and broader public health efforts. In this paper, we investigate the performance of large language models (LLMs) and traditional machine learning classifiers across three classification tasks involving social media data: binary depression classification, depression severity classification, and differential diagnosis among depression, PTSD, and anxiety. Our study compares zero-shot LLMs with supervised classifiers trained on both conventional text embeddings and LLM-generated summary embeddings. Our experiments reveal that while zero-shot LLMs demonstrate strong generalization in binary classification, they struggle with fine-grained ordinal classification. In contrast, classifiers trained on summary embeddings generated by LLMs achieve competitive, and in some cases superior, performance on these tasks, particularly compared to models using traditional text embeddings. Our findings demonstrate the strengths of LLMs in mental health prediction and suggest promising directions for better leveraging their zero-shot capabilities and context-aware summarization.
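The abstract's second approach, training a supervised classifier on LLM-derived embeddings, can be sketched as follows. This is a minimal illustration under stated assumptions: the paper does not specify its models or pipeline, so random vectors stand in for embeddings of LLM-generated summaries, and a logistic-regression probe stands in for the supervised classifier.

```python
# Hedged sketch: random vectors substitute for LLM summary embeddings,
# since computing real embeddings requires an external model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n_posts, dim = 200, 64  # hypothetical corpus size and embedding dimension

# Placeholder: each row would be the embedding of an LLM-generated
# summary of a user's posts; labels mark depressed vs. control.
X = rng.normal(size=(n_posts, dim))
y = rng.integers(0, 2, size=n_posts)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0
)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
score = f1_score(y_te, clf.predict(X_te))
print(f"F1 on held-out posts: {score:.3f}")
```

With real summary embeddings in place of the random vectors, the same linear probe also exposes which embedding dimensions drive predictions, which is one route to the interpretability the paper emphasizes.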
Similar Papers
Generating Medically-Informed Explanations for Depression Detection using LLMs
Computation and Language
Finds depression early from online posts.
Leveraging Large Language Models for Cost-Effective, Multilingual Depression Detection and Severity Assessment
Computation and Language
Helps find depression from what people write.
A Survey of Large Language Models in Mental Health Disorder Detection on Social Media
Computation and Language
Helps find mental health problems on social media.