LLMs as Architects and Critics for Multi-Source Opinion Summarization
By: Anuj Attri, Arnav Attri, Pushpak Bhattacharyya, and more
Potential Business Impact:
Summaries tell you more about products.
Multi-source Opinion Summarization (M-OS) extends traditional opinion summarization by incorporating additional sources of product metadata, such as descriptions, key features, specifications, and ratings, alongside reviews. This integration yields comprehensive summaries that capture both subjective opinions and the objective product attributes essential for informed decision-making. While Large Language Models (LLMs) have shown significant success in various Natural Language Processing (NLP) tasks, their potential in M-OS remains largely unexplored, and the lack of evaluation datasets for this task has impeded further advancements. To bridge this gap, we introduce M-OS-EVAL, a benchmark dataset for evaluating multi-source opinion summaries across 7 key dimensions: fluency, coherence, relevance, faithfulness, aspect coverage, sentiment consistency, and specificity. Our experiments demonstrate that factually enriched summaries significantly enhance user engagement: in a user study, on average, 87% of participants preferred M-OS over opinion summaries. Notably, M-OS-PROMPTS exhibit stronger alignment with human judgment, achieving an average Spearman correlation of ρ = 0.74, surpassing previous methodologies.
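The reported alignment with human judgment is measured by Spearman rank correlation between evaluator scores and human ratings. As a minimal illustrative sketch (the score lists below are invented, not data from the paper), Spearman's ρ can be computed by ranking both score lists, with ties averaged, and taking the Pearson correlation of the ranks:

```python
def ranks(values):
    """1-based ranks of values, with tied values assigned their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank-transformed lists."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical 1-5 quality ratings for six summaries on one dimension:
human_scores = [4, 3, 5, 2, 4, 1]
model_scores = [4, 2, 5, 1, 3, 2]
rho = spearman(human_scores, model_scores)
```

In practice, per-dimension correlations (fluency, coherence, etc.) would be computed this way and then averaged to obtain an aggregate figure like the paper's ρ = 0.74.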
Similar Papers
"This Suits You the Best": Query Focused Comparative Explainable Summarization
Computation and Language
Helps shoppers compare products with clear reasons.
Towards Multi-dimensional Evaluation of LLM Summarization across Domains and Languages
Computation and Language
Tests how well computers summarize text.
MTOS: A LLM-Driven Multi-topic Opinion Simulation Framework for Exploring Echo Chamber Dynamics
Artificial Intelligence
Shows how online opinions change across topics.