LLMs as Architects and Critics for Multi-Source Opinion Summarization

Published: July 7, 2025 | arXiv ID: 2507.04751v1

By: Anuj Attri, Arnav Attri, Pushpak Bhattacharyya, and more

Potential Business Impact:

Summaries that combine customer reviews with product metadata give shoppers a fuller picture of a product and support more informed purchase decisions.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Multi-source Opinion Summarization (M-OS) extends beyond traditional opinion summarization by incorporating additional sources of product metadata such as descriptions, key features, specifications, and ratings, alongside reviews. This integration yields comprehensive summaries that capture both subjective opinions and objective product attributes essential for informed decision-making. While Large Language Models (LLMs) have shown significant success in various Natural Language Processing (NLP) tasks, their potential in M-OS remains largely unexplored. Additionally, the lack of evaluation datasets for this task has impeded further advancements. To bridge this gap, we introduce M-OS-EVAL, a benchmark dataset for evaluating multi-source opinion summaries across 7 key dimensions: fluency, coherence, relevance, faithfulness, aspect coverage, sentiment consistency, and specificity. Our experiments demonstrate that factually enriched M-OS summaries significantly enhance user engagement: in a user study, on average 87% of participants preferred M-OS over traditional opinion summaries. Notably, M-OS-PROMPTS exhibits stronger alignment with human judgment, achieving an average Spearman correlation of ρ = 0.74, surpassing previous methodologies.
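The reported ρ = 0.74 refers to rank agreement between evaluator scores and human judgments averaged over the seven dimensions. The sketch below is not the authors' code; it only illustrates, under the assumption that per-summary scores from an LLM evaluator and from human annotators are available for each dimension, how such an average Spearman correlation could be computed. The dimension names come from the abstract; the function and variable names are hypothetical.

# Minimal sketch (illustrative, not the paper's implementation): average
# Spearman correlation between LLM-evaluator scores and human judgments
# across the seven M-OS-EVAL dimensions.
from scipy.stats import spearmanr

DIMENSIONS = [
    "fluency", "coherence", "relevance", "faithfulness",
    "aspect_coverage", "sentiment_consistency", "specificity",
]

def average_spearman(llm_scores: dict, human_scores: dict) -> float:
    # llm_scores / human_scores: dimension -> list of per-summary scores,
    # aligned by index (same summaries, same order).
    rhos = []
    for dim in DIMENSIONS:
        rho, _p = spearmanr(llm_scores[dim], human_scores[dim])
        rhos.append(rho)
    return sum(rhos) / len(rhos)

# Hypothetical usage with toy ratings for three summaries:
llm = {d: [4, 3, 5] for d in DIMENSIONS}
human = {d: [5, 3, 4] for d in DIMENSIONS}
print(f"average Spearman rho = {average_spearman(llm, human):.2f}")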

Country of Origin
🇮🇳 India

Page Count
16 pages

Category
Computer Science:
Computation and Language