Improving Narrative Classification and Explanation via Fine-Tuned Language Models
By: Rishit Tyagi, Rahul Bouri, Mohit Gupta
Potential Business Impact:
Finds hidden messages and explains them clearly.
Understanding covert narratives and implicit messaging is essential for analyzing bias and sentiment. Traditional NLP methods struggle with detecting subtle phrasing and hidden agendas. This study tackles two key challenges: (1) multi-label classification of narratives and sub-narratives in news articles, and (2) generating concise, evidence-based explanations for dominant narratives. We fine-tune a BERT model with a recall-oriented approach for comprehensive narrative detection, refining predictions using a GPT-4o pipeline for consistency. For narrative explanation, we propose a ReACT (Reasoning + Acting) framework with semantic retrieval-based few-shot prompting, ensuring grounded and relevant justifications. To enhance factual accuracy and reduce hallucinations, we incorporate a structured taxonomy table as an auxiliary knowledge base. Our results show that integrating auxiliary knowledge in prompts improves classification accuracy and justification reliability, with applications in media analysis, education, and intelligence gathering.
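The first stage described above pairs a fine-tuned BERT multi-label classifier with a recall-oriented decision rule, so candidate narratives are over-generated and later pruned by the GPT-4o consistency pass. Below is a minimal sketch of that prediction step, assuming a Hugging Face transformers setup; the label list, checkpoint name, and 0.35 threshold are illustrative placeholders, not values taken from the paper.

```python
# Sketch of recall-oriented multi-label narrative prediction.
# Labels, checkpoint, and threshold are hypothetical; in practice the model
# would be a BERT checkpoint fine-tuned on the narrative taxonomy.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

NARRATIVE_LABELS = ["climate_denial", "anti_institution", "pro_conflict"]  # illustrative subset

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",  # replace with the fine-tuned checkpoint; this head is freshly initialized
    num_labels=len(NARRATIVE_LABELS),
    problem_type="multi_label_classification",  # trains with BCE loss, one sigmoid per label
)

def predict_narratives(article: str, threshold: float = 0.35) -> list[str]:
    """Return every narrative whose sigmoid score clears the threshold."""
    inputs = tokenizer(article, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    scores = torch.sigmoid(logits).squeeze(0)
    return [label for label, score in zip(NARRATIVE_LABELS, scores) if score >= threshold]

print(predict_narratives("Officials dismissed the report as alarmist ..."))
```

Setting the threshold below the usual 0.5 trades precision for recall, which matches the stated strategy of letting the downstream LLM refinement step filter out spurious labels rather than missing narratives at the first stage.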
Similar Papers
Fine-grained Narrative Classification in Biased News Articles
Computation and Language
Finds hidden stories that sway people's opinions.
Improving Crash Data Quality with Large Language Models: Evidence from Secondary Crash Narratives in Kentucky
Computation and Language
Finds hidden car crash causes in police reports.
Regularization Through Reasoning: Systematic Improvements in Language Model Classification via Explanation-Enhanced Fine-Tuning
Machine Learning (CS)
Makes AI better at choosing the right answer.