sui-1: Grounded and Verifiable Long-Form Summarization
By: Benedikt Droste, Jan Philipp Harries, Maximilian Idahl, and more
Large language models frequently generate plausible but unfaithful summaries that users cannot verify against the source text, a critical limitation in compliance-sensitive domains such as government and legal analysis. We present sui-1, a 24B-parameter model that produces abstractive summaries with inline citations, enabling users to trace each claim back to its source sentence. Our synthetic data pipeline combines chain-of-thought prompting with multi-stage verification, generating over 22,000 high-quality training examples across five languages from diverse sources including parliamentary documents, web text, and Wikipedia. Evaluation shows that sui-1 significantly outperforms all tested open-weight baselines, including models with 3x more parameters. These results demonstrate that task-specific training, rather than scale alone, is what drives citation-grounded summarization quality. Model weights and an interactive demo are publicly available.
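To make the citation-grounded format concrete, here is a minimal sketch of how such summaries can be verified against their source. The abstract does not specify sui-1's marker syntax, so the bracketed sentence indices (e.g. [3]) and the `verify_citations` helper below are illustrative assumptions, not the model's actual output format.

```python
import re

def verify_citations(source_sentences: list[str], summary: str) -> dict:
    """Check that every inline citation in a summary resolves to a real
    source sentence. Assumes citations are bracketed 1-based indices
    like [3] or [2,5]; sui-1's actual marker syntax may differ.
    """
    cited = set()
    for match in re.findall(r"\[([\d,\s]+)\]", summary):
        cited.update(int(i) for i in match.split(",") if i.strip())
    valid = {i for i in cited if 1 <= i <= len(source_sentences)}
    return {
        "cited": sorted(cited),
        "dangling": sorted(cited - valid),  # citations with no matching sentence
        "resolved": {i: source_sentences[i - 1] for i in valid},
    }

# Example: two source sentences, a summary citing both.
source = [
    "The committee approved the budget on 12 May.",
    "Funding for rural broadband was doubled.",
]
summary = "The budget passed in May [1], with broadband funding doubled [2]."
report = verify_citations(source, summary)
assert not report["dangling"]  # every claim traces back to a source sentence
```

A check like this only validates that citations point at existing sentences; judging whether each cited sentence actually supports the claim is the harder faithfulness problem the paper's multi-stage verification pipeline is built to address.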
Similar Papers
Unstructured Evidence Attribution for Long Context Query Focused Summarization
Computation and Language
Helps computers find and use exact facts for summaries.
Enhancing Long Document Long Form Summarisation with Self-Planning
Computation and Language
Makes summaries of long texts more accurate.