Automated Data Enrichment using Confidence-Aware Fine-Grained Debate among Open-Source LLMs for Mental Health and Online Safety
By: Junyu Mao, Anthony Hills, Talia Tseriotou, and more
Potential Business Impact:
Teaches computers to understand real-life events better.
Real-world indicators, such as life events for mental health analysis and risky behaviour for online safety, are important for improving natural language processing (NLP) tasks, yet labelling such information in NLP training datasets is often costly or difficult given the dynamic nature of these events. This paper compares several LLM-based data enrichment methods and introduces a novel Confidence-Aware Fine-Grained Debate (CFD) framework in which multiple LLM agents simulate human annotators and exchange fine-grained evidence to reach consensus. We describe two new expert-annotated datasets: a mental health Reddit wellbeing dataset and an online safety Facebook sharenting risk dataset. Our CFD framework achieves the most robust data enrichment performance compared to a range of baselines, and we show that this type of data enrichment consistently improves downstream tasks. Enriched features incorporated via debate transcripts yield the largest gains, outperforming the non-enriched baseline by 10.1% on the online safety task.
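The abstract only outlines the debate loop at a high level, so the following minimal Python sketch illustrates what a confidence-aware debate among LLM "annotator" agents could look like for label enrichment. The debate_enrich function, the AgentTurn fields, the query interface passed in as agents, and the confidence-weighted stopping rule are all illustrative assumptions for exposition, not the paper's actual CFD implementation.

# Minimal sketch of a confidence-aware multi-agent debate loop for data
# enrichment, loosely following the abstract's description. The agent
# interface and the consensus rule below are assumptions, not the paper's code.
from dataclasses import dataclass
from collections import defaultdict
from typing import Callable, List, Tuple

@dataclass
class AgentTurn:
    label: str         # the agent's proposed annotation (e.g. a life-event tag)
    evidence: str      # fine-grained evidence quoted from the post
    confidence: float  # self-reported confidence in [0, 1]

def debate_enrich(
    post: str,
    agents: List[Callable[[str, List[AgentTurn]], AgentTurn]],
    max_rounds: int = 3,
    agreement_threshold: float = 0.8,
) -> Tuple[str, List[AgentTurn]]:
    """Run a small debate among LLM 'annotator' agents and return the
    consensus label plus the transcript of turns (the transcript can then be
    fed to a downstream model as an enriched feature)."""
    transcript: List[AgentTurn] = []
    top_label = ""
    for _ in range(max_rounds):
        # Each agent sees the post and the debate so far, then replies with
        # a label, supporting evidence, and a confidence score.
        turns = [agent(post, transcript) for agent in agents]
        transcript.extend(turns)

        # Confidence-weighted vote over the labels proposed this round.
        weight = defaultdict(float)
        for turn in turns:
            weight[turn.label] += turn.confidence
        top_label, top_weight = max(weight.items(), key=lambda kv: kv[1])

        # Stop early once one label carries most of the confidence mass.
        if top_weight / sum(weight.values()) >= agreement_threshold:
            return top_label, transcript
    return top_label, transcript

In this sketch each agent is just a callable that maps (post, debate-so-far) to an AgentTurn, so it can wrap any open-source LLM; the consensus rule shown here is a simple confidence-weighted vote chosen for clarity.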
Similar Papers
Leveraging LLMs for Mental Health: Detection and Recommendations from Social Discussions
Social and Information Networks
Helps find mental health problems from online posts.
FedMentalCare: Towards Privacy-Preserving Fine-Tuned LLMs to Analyze Mental Health Status Using Federated Learning Framework
Computation and Language
Keeps mental health chats private for AI.
DialogGuard: Multi-Agent Psychosocial Safety Evaluation of Sensitive LLM Responses
Artificial Intelligence
Tests AI for safe and helpful online chats.