Score: 2

High Accuracy, Less Talk (HALT): Reliable LLMs through Capability-Aligned Finetuning

Published: June 4, 2025 | arXiv ID: 2506.04051v1

By: Tim Franzmeyer, Archie Sravankumar, Lijuan Liu, and more

BigTech Affiliations: Meta

Potential Business Impact:

Makes AI say "I don't know" when unsure.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large Language Models (LLMs) currently respond to every prompt. However, they can produce incorrect answers when they lack knowledge or capability -- a problem known as hallucination. We instead propose post-training an LLM to generate content only when confident in its correctness and to otherwise (partially) abstain. Specifically, our method, HALT, produces capability-aligned post-training data that encodes what the model can and cannot reliably generate. We generate this data by splitting responses of the pretrained LLM into factual fragments (atomic statements or reasoning steps), and use ground truth information to identify incorrect fragments. We achieve capability-aligned finetuning responses by either removing incorrect fragments or replacing them with "Unsure from Here" -- according to a tunable threshold that allows practitioners to trade off response completeness and mean correctness of the response's fragments. We finetune four open-source models for biography writing, mathematics, coding, and medicine with HALT for three different trade-off thresholds. HALT effectively trades off response completeness for correctness, increasing the mean correctness of response fragments by 15% on average, while resulting in a 4% improvement in the F1 score (mean of completeness and correctness of the response) compared to the relevant baselines. By tuning HALT for highest correctness, we train a single reliable Llama3-70B model with correctness increased from 51% to 87% across all four domains while maintaining 53% of the response completeness achieved with standard finetuning.
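
The abstract describes building finetuning targets by grading response fragments against ground truth and then, under a tunable threshold, either dropping incorrect fragments or abstaining with "Unsure from Here". A minimal illustrative sketch of that data-construction idea is below, assuming the response has already been split into fragments and each fragment has a ground-truth correctness verdict; the function name, threshold rule, and example data are assumptions for illustration, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Fragment:
    text: str
    correct: bool  # verdict from checking the fragment against ground truth


def build_halt_target(fragments: List[Fragment], threshold: float) -> str:
    """Build a capability-aligned finetuning response (illustrative only).

    `threshold` in [0, 1] is the minimum running fraction of correct fragments
    tolerated so far: below it, the target abstains with "Unsure from Here";
    above it, single incorrect fragments are silently removed and generation
    of the target continues. Lower thresholds favor completeness, higher
    thresholds favor correctness.
    """
    kept: List[str] = []
    n_correct = 0
    for i, frag in enumerate(fragments, start=1):
        if frag.correct:
            n_correct += 1
            kept.append(frag.text)
        elif n_correct / i < threshold:
            # Too many errors so far: partially abstain from here onward.
            kept.append("Unsure from Here")
            break
        # else: drop this incorrect fragment and keep the rest of the response.
    return " ".join(kept)


if __name__ == "__main__":
    demo = [
        Fragment("Marie Curie was born in Warsaw in 1867.", True),
        Fragment("She won the Nobel Prize in Chemistry in 1903.", False),  # wrong field/year
        Fragment("She won a second Nobel Prize in 1911.", True),
    ]
    # High threshold: abstain early. Low threshold: keep more, drop the bad fragment.
    print(build_halt_target(demo, threshold=0.7))
    print(build_halt_target(demo, threshold=0.3))
```

In this sketch, the high-threshold call truncates the response with "Unsure from Here" after the first verified fragment, while the low-threshold call removes only the incorrect fragment and keeps the rest, mirroring the completeness-versus-correctness trade-off the abstract attributes to the tunable threshold.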

Country of Origin
🇬🇧 🇺🇸 United Kingdom, United States

Page Count
16 pages

Category
Computer Science:
Computation and Language