Learning from Self Critique and Refinement for Faithful LLM Summarization
By: Ting-Yao Hu, Hema Swetha Koppula, Hadi Pouransari, and more
Potential Business Impact:
Teaches AI to write summaries without making things up.
Large Language Models (LLMs) often suffer from hallucinations, i.e., producing content that is not grounded in the input context, when performing long-form text generation tasks such as summarization. Prior work has shown that hallucinations can be reduced by iteratively critiquing and refining previously generated outputs, using either the same model or a more powerful teacher model as the critic. However, these approaches either require additional test-time compute or assume access to more powerful teacher models, making them costly and less practical. In this work, we propose Self Critique and Refinement-based Preference Optimization (SCRPO), a self-supervised training framework that first constructs a preference dataset by leveraging the LLM's own critique and refinement capabilities, and then applies preference learning to improve the same LLM for faithful summarization. Experiments on three summarization benchmarks (XSum, CNN/DM, and SAMSum) demonstrate that our approach outperforms state-of-the-art self-supervised learning methods on faithfulness metrics while maintaining or improving metrics that measure overall summary quality. Moreover, compared to test-time refinement, our approach not only improves efficiency but also yields more faithful summaries.
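The abstract outlines a two-stage pipeline: build preference pairs from the model's own critique-and-refinement loop, then train with preference learning. Below is a minimal sketch of that idea, not the authors' implementation. It assumes a DPO-style objective (the abstract says only "preference learning" and does not name the loss), and the functions llm_generate, llm_critique, and llm_refine are hypothetical stubs standing in for real model calls.

```python
import math

# Hypothetical stubs standing in for calls to the summarization LLM.
# In the real framework these would all query the *same* model.

def llm_generate(prompt: str) -> str:
    """Placeholder: produce an initial summary for the document."""
    return "initial summary containing an unsupported claim"

def llm_critique(document: str, summary: str) -> str:
    """Placeholder: ask the same LLM to flag unfaithful content."""
    return "claim X is not supported by the document"

def llm_refine(document: str, summary: str, critique: str) -> str:
    """Placeholder: ask the same LLM to rewrite the summary given the critique."""
    return "refined summary grounded in the document"

def build_preference_pair(document: str) -> tuple[str, str]:
    """One self critique-and-refinement round: the refined summary is treated
    as the preferred (chosen) response, the initial draft as the rejected one."""
    initial = llm_generate(f"Summarize:\n{document}")
    critique = llm_critique(document, initial)
    refined = llm_refine(document, initial, critique)
    return refined, initial  # (chosen, rejected)

def dpo_loss(logp_chosen: float, logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    """Standard DPO objective on one preference pair:
    -log sigmoid(beta * ((logp_c - ref_logp_c) - (logp_r - ref_logp_r)))."""
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

if __name__ == "__main__":
    chosen, rejected = build_preference_pair("Some source document ...")
    print(f"chosen:   {chosen!r}")
    print(f"rejected: {rejected!r}")
    # Dummy sequence log-probs standing in for policy/reference model scores.
    print("loss:", round(dpo_loss(-12.0, -15.0, -13.0, -14.0), 4))
```

Under this reading, the training signal comes entirely from the model's own critiques: the refined summary anchors the preferred side of each pair, so no teacher model or extra test-time refinement is needed once training is done.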
Similar Papers
Mitigating Hallucinations in Zero-Shot Scientific Summarisation: A Pilot Study
Computation and Language
Makes AI summaries of science papers more accurate.
Faithful Summarization of Consumer Health Queries: A Cross-Lingual Framework with LLMs
Computation and Language
Makes summaries of patient health questions faithful across languages.
Self-Critique-Guided Curiosity Refinement: Enhancing Honesty and Helpfulness in Large Language Models via In-Context Learning
Computation and Language
Makes AI tell the truth and be more helpful.