An Interpretability-Guided Framework for Responsible Synthetic Data Generation in Emotional Text
By: Paula Joy B. Martinez, Jose Marie Antonio Miñoza, Sebastian C. Ibañez
Potential Business Impact:
Creates fake social media posts to train AI.
Emotion recognition from social media is critical for understanding public sentiment, but accessing training data has become prohibitively expensive due to escalating API costs and platform restrictions. We introduce an interpretability-guided framework where Shapley Additive Explanations (SHAP) provide principled guidance for LLM-based synthetic data generation. With sufficient seed data, the SHAP-guided approach matches real-data performance, significantly outperforms naïve generation, and substantially improves classification for underrepresented emotion classes. However, our linguistic analysis reveals that synthetic text exhibits lower vocabulary richness and fewer personal or temporally complex expressions than authentic posts. This work provides both a practical framework for responsible synthetic data generation and a critical perspective on its limitations, underscoring that the future of trustworthy AI depends on navigating the trade-offs between synthetic utility and real-world authenticity.
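To make the core idea concrete, here is a minimal sketch (not the authors' code) of what SHAP-guided generation can look like: train a simple one-vs-rest classifier for an underrepresented emotion on a small seed corpus, compute SHAP attributions for its decision function, and surface the top positively attributed tokens as guidance inside an LLM generation prompt. The toy corpus, the "grief" target class, and the prompt template are all illustrative assumptions; the paper's actual models and prompting strategy may differ.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy seed corpus; label 1 marks the underrepresented target emotion ("grief").
texts = [
    "i miss her every single day",
    "lost my grandfather last week, still can't believe it",
    "so excited for the concert tonight!!",
    "this traffic is making me furious",
    "the funeral was yesterday, feeling empty",
    "best birthday ever, thank you all",
]
labels = np.array([1, 1, 0, 0, 1, 0])

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts).toarray()
clf = LogisticRegression().fit(X, labels)

# For a linear model with independent features, the exact SHAP value of
# feature i on input x is w_i * (x_i - E[x_i]); we use that closed form
# here instead of calling the shap library to keep the sketch dependency-light.
phi = clf.coef_[0] * (X - X.mean(axis=0))

# Average attribution over the positive-class seeds: the tokens that most
# push the classifier toward the underrepresented emotion.
mean_phi = phi[labels == 1].mean(axis=0)
vocab = np.array(vectorizer.get_feature_names_out())
top_tokens = vocab[np.argsort(mean_phi)[::-1][:5]]

# Hypothetical prompt template for the downstream LLM generation step.
prompt = (
    "Write a short, realistic social media post expressing grief. "
    f"Ground it in themes like: {', '.join(top_tokens)}."
)
print(prompt)

In a full pipeline, the prompt would be sent to an LLM and the generated posts added to the training set for the underrepresented class; the SHAP step is what keeps the generation anchored to features the classifier actually relies on, rather than whatever the LLM associates with the emotion label.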
Similar Papers
Integration of Explainable AI Techniques with Large Language Models for Enhanced Interpretability for Sentiment Analysis
Computation and Language
Shows how computers understand feelings, layer by layer.
ContextualSHAP: Enhancing SHAP Explanations Through Contextual Language Generation
Artificial Intelligence
Explains AI decisions in simple words for everyone.
From Emotion Classification to Emotional Reasoning: Enhancing Emotional Intelligence in Large Language Models
Computation and Language
Teaches AI to understand feelings better.