Faithful Summarisation under Disagreement via Belief-Level Aggregation

Published: January 8, 2026 | arXiv ID: 2601.04889v1

By: Favour Yahdii Aghaebe, Tanefa Apekey, Elizabeth Williams, and more

Potential Business Impact:

Summaries represent conflicting viewpoints rather than only the majority opinion.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Opinion and multi-document summarisation often involve genuinely conflicting viewpoints, yet many existing approaches, particularly LLM-based systems, implicitly smooth disagreement and over-represent majority opinions. This limits the faithfulness of generated summaries in opinion-heavy settings. We introduce a disagreement-aware synthesis pipeline that separates belief-level aggregation from language generation. Documents are first represented as structured belief sets and aggregated using distance-based belief merging operators that explicitly model conflict. Large language models are then used only to realise the aggregated beliefs as natural language summaries. We evaluate the approach across multiple model families and scales, comparing it to methods that perform explicit aggregation during generation. Our results show that while sufficiently large models can match belief-level aggregation when aggregation is handled at generation time, this behaviour is not stable across architectures or capacities. In contrast, belief-level aggregation combined with simple prompting yields consistently strong disagreement-aware performance across models, while maintaining fluent and grounded summaries.
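The abstract's "distance-based belief merging operators that explicitly model conflict" can be illustrated with a minimal sketch of a classic operator from the belief-merging literature (summed Hamming distance over propositional models). The paper's actual operators and belief representation are not specified here, so the representation, function names, and example data below are all illustrative assumptions.

```python
# Sketch of distance-based belief merging (Sigma-Hamming style operator).
# Each belief base is a set of models: tuples of 0/1 over n_atoms atoms.
# The merged result keeps every world that minimizes the summed distance
# to all bases, so conflicting beliefs survive as multiple worlds instead
# of being smoothed into one majority view.
from itertools import product

def hamming(w1, w2):
    """Hamming distance between two truth assignments."""
    return sum(a != b for a, b in zip(w1, w2))

def merge(belief_bases, n_atoms):
    """Return all interpretations minimizing summed distance to the bases.

    The distance from a world to a base is the distance to its closest model.
    """
    worlds = list(product((0, 1), repeat=n_atoms))
    def cost(w):
        return sum(min(hamming(w, m) for m in base) for base in belief_bases)
    best = min(cost(w) for w in worlds)
    return [w for w in worlds if cost(w) == best]

# Hypothetical example: two reviewers agree the food is good (atom 0)
# but disagree about the service (atom 1).
base_a = {(1, 1)}   # food good, service good
base_b = {(1, 0)}   # food good, service bad
print(merge([base_a, base_b], n_atoms=2))  # → [(1, 0), (1, 1)]
```

Note that the merged output retains both worlds that disagree on service, preserving the conflict for the language model to verbalise ("reviewers agreed on the food but were split on service") rather than collapsing it to one side.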

Country of Origin
🇬🇧 United Kingdom

Page Count
16 pages

Category
Computer Science:
Computation and Language