A Gold Standard Dataset and Evaluation Framework for Depression Detection and Explanation in Social Media using LLMs
By: Prajval Bolegave, Pushpak Bhattacharyya
Potential Business Impact:
Detects signs of depression in online posts so people can get help earlier.
Early detection of depression from social media posts holds promise for providing timely mental health interventions. In this work, we present a high-quality, expert-annotated dataset of 1,017 social media posts labeled with depressive spans and mapped to 12 depression symptom categories. Unlike prior datasets that primarily offer coarse post-level labels \cite{cohan-etal-2018-smhd}, our dataset enables fine-grained evaluation of both model predictions and generated explanations. We develop an evaluation framework that leverages this clinically grounded dataset to assess the faithfulness and quality of natural language explanations generated by large language models (LLMs). Through carefully designed prompting strategies, including zero-shot and few-shot approaches with domain-adapted examples, we evaluate state-of-the-art proprietary LLMs, including GPT-4.1, Gemini 2.5 Pro, and Claude 3.7 Sonnet. Our comprehensive empirical analysis reveals significant differences in how these models perform on clinical explanation tasks under both zero-shot and few-shot prompting. Our findings underscore the value of human expertise in guiding LLM behavior and offer a step toward safer, more transparent AI systems for psychological well-being.
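To make the prompting setup described above concrete, the sketch below builds zero-shot and few-shot requests for depressive-span extraction and scores predicted spans against expert spans with a crude token-overlap F1. It is a minimal sketch only: the prompt wording, the placeholder symptom list, the model id, and the overlap metric are illustrative assumptions, not the paper's actual 12-category taxonomy, prompts, or evaluation framework. It assumes the OpenAI Python SDK's chat.completions interface.

```python
# Illustrative sketch only: prompt text, symptom list, and metric are assumptions,
# not the paper's protocol.
from openai import OpenAI  # assumes the OpenAI Python SDK (>=1.0) is installed

client = OpenAI()

# Placeholder categories; the paper defines its own 12 clinically grounded symptoms.
SYMPTOMS = ["low mood", "sleep disturbance", "fatigue", "anhedonia"]

SYSTEM = (
    "You are a clinical-NLP assistant. Given a social media post, extract the text "
    "spans that indicate depressive symptoms, map each span to one of these symptom "
    f"categories: {SYMPTOMS}, and briefly explain your reasoning."
)

def build_messages(post: str, exemplars: list[dict] | None = None) -> list[dict]:
    """Zero-shot when exemplars is None; few-shot when annotated examples are supplied."""
    messages = [{"role": "system", "content": SYSTEM}]
    for ex in exemplars or []:
        messages.append({"role": "user", "content": ex["post"]})
        messages.append({"role": "assistant", "content": ex["annotation"]})
    messages.append({"role": "user", "content": post})
    return messages

def annotate(post: str, exemplars: list[dict] | None = None, model: str = "gpt-4.1") -> str:
    """Request span-level annotations and an explanation from the model."""
    response = client.chat.completions.create(
        model=model,
        messages=build_messages(post, exemplars),
        temperature=0,  # keep outputs as stable as possible for evaluation
    )
    return response.choices[0].message.content

def span_f1(pred_spans: list[str], gold_spans: list[str]) -> float:
    """Crude token-overlap F1 between predicted and expert spans (illustrative only)."""
    pred = {t for s in pred_spans for t in s.lower().split()}
    gold = {t for s in gold_spans for t in s.lower().split()}
    if not pred or not gold:
        return 0.0
    p = len(pred & gold) / len(pred)
    r = len(pred & gold) / len(gold)
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)
```

A few-shot run would pass a handful of expert-annotated posts as exemplars; the same span_f1 check can then compare the model's extracted spans against the gold annotations, which is one simple way to probe how faithful the generated explanations are to the expert-marked evidence.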
Similar Papers
Interpretable Depression Detection from Social Media Text Using LLM-Derived Embeddings
Computation and Language
Finds depression in posts and explains why, using interpretable embeddings.
Generating Medically-Informed Explanations for Depression Detection using LLMs
Computation and Language
Finds depression early from online posts.
DepressLLM: Interpretable domain-adapted language model for depression detection from real-world narratives
Computation and Language
Helps find depression from people's stories.