Score: 1

FLEx: Language Modeling with Few-shot Language Explanations

Published: January 7, 2026 | arXiv ID: 2601.04157v1

By: Adar Avsian, Christopher Richardson, Anirudh Sundar, and more

BigTech Affiliations: Microsoft

Potential Business Impact:

Helps language models avoid repeating their own mistakes without retraining.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Language models have become effective at a wide range of tasks, from math problem solving to open-domain question answering. However, they still make mistakes, and these mistakes are often repeated across related queries. Natural language explanations can help correct these errors, but collecting them at scale may be infeasible, particularly in domains that require expert annotators. To address this issue, we introduce FLEx (Few-shot Language Explanations), a method for improving model behavior using a small number of explanatory examples. FLEx selects representative model errors using embedding-based clustering, verifies that the associated explanations correct those errors, and summarizes them into a prompt prefix that is prepended at inference time. This summary guides the model to avoid similar errors on new inputs, without modifying model weights. We evaluate FLEx on CounterBench, GSM8K, and ReasonIF. We find that FLEx consistently outperforms chain-of-thought (CoT) prompting across all three datasets and eliminates up to 83% of the errors that remain under CoT.
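The abstract outlines a three-step pipeline: cluster error cases to pick representatives, keep only explanations that verifiably fix those errors, and prepend a summary of the survivors at inference time. The sketch below illustrates that flow in Python. It is a minimal illustration based only on the abstract, not the authors' implementation: `embed`, `call_model`, `is_correct`, and the concatenation-style `build_prefix` are hypothetical placeholders, and the clustering step simply picks the error closest to each k-means centroid.

```python
# Minimal sketch of a FLEx-style pipeline, assuming placeholder components.
import numpy as np
from sklearn.cluster import KMeans


def embed(texts):
    """Placeholder embedding (assumption): one random vector per text.
    A real pipeline would use a sentence-embedding model here."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(texts), 16))


def call_model(prompt):
    """Placeholder language-model call (assumption)."""
    return "model answer for: " + prompt


def is_correct(answer, reference):
    """Placeholder correctness check (assumption)."""
    return answer.strip() == reference.strip()


def select_representative_errors(errors, k):
    """Cluster error cases by embedding; keep the case nearest each centroid."""
    vecs = embed([e["question"] for e in errors])
    k = min(k, len(errors))
    km = KMeans(n_clusters=k, n_init="auto", random_state=0).fit(vecs)
    reps = []
    for c in range(k):
        idx = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(vecs[idx] - km.cluster_centers_[c], axis=1)
        reps.append(errors[idx[np.argmin(dists)]])
    return reps


def verify_explanations(reps):
    """Keep only explanations that actually flip the model's answer to correct."""
    kept = []
    for e in reps:
        fixed = call_model(e["explanation"] + "\n\n" + e["question"])
        if is_correct(fixed, e["reference"]):
            kept.append(e["explanation"])
    return kept


def build_prefix(explanations):
    """Summarize verified explanations into a prompt prefix (here: plain concatenation)."""
    return "Avoid these mistakes:\n- " + "\n- ".join(explanations) + "\n\n"


def answer_with_flex(prefix, question):
    """Prepend the summary prefix at inference time; model weights stay untouched."""
    return call_model(prefix + question)
```

Because the correction lives entirely in the prompt prefix, the same summary can be reused across related queries, which is how the method amortizes a small number of expert explanations over many inputs.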

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
19 pages

Category
Computer Science:
Computation and Language