FLEx: Language Modeling with Few-shot Language Explanations
By: Adar Avsian, Christopher Richardson, Anirudh Sundar, and more
Potential Business Impact:
Teaches computers to fix their own mistakes.
Language models have become effective at a wide range of tasks, from math problem solving to open-domain question answering. However, they still make mistakes, and these mistakes are often repeated across related queries. Natural language explanations can help correct these errors, but collecting them at scale may be infeasible, particularly in domains where expert annotators are required. To address this issue, we introduce FLEx (Few-shot Language Explanations), a method for improving model behavior using a small number of explanatory examples. FLEx selects representative model errors using embedding-based clustering, verifies that the associated explanations correct those errors, and summarizes them into a prompt prefix that is prepended at inference time. This summary guides the model to avoid similar errors on new inputs, without modifying model weights. We evaluate FLEx on CounterBench, GSM8K, and ReasonIF. We find that FLEx consistently outperforms chain-of-thought (CoT) prompting across all three datasets and reduces CoT's remaining errors by up to 83%.
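The abstract describes a three-step pipeline: cluster model errors to pick representatives, verify that each paired explanation actually fixes its error, and summarize the verified explanations into a prompt prefix. Below is a minimal sketch of that flow under stated assumptions; the helper callables `embed_fn` and `llm_fn`, the substring-based correctness check, and the clustering details are hypothetical placeholders, not the paper's actual implementation.

```python
import numpy as np
from sklearn.cluster import KMeans


def build_flex_prefix(errors, explanations, embed_fn, llm_fn, n_clusters=5):
    """Sketch of a FLEx-style prefix builder (assumptions noted below).

    errors       : list of (question, wrong_answer, correct_answer) tuples
    explanations : one natural-language explanation per error
    embed_fn     : hypothetical callable, text -> embedding vector
    llm_fn       : hypothetical callable, prompt -> generated text
    """
    # 1. Embed each error and cluster to find representative mistakes.
    vectors = np.array([embed_fn(q + " " + wrong) for q, wrong, _ in errors])
    k = min(n_clusters, len(errors))
    labels = KMeans(n_clusters=k, n_init=10).fit(vectors).labels_

    # Take the error closest to each cluster centroid as its representative.
    representatives = []
    for c in range(k):
        idx = np.where(labels == c)[0]
        centroid = vectors[idx].mean(axis=0)
        best = idx[np.argmin(np.linalg.norm(vectors[idx] - centroid, axis=1))]
        representatives.append(best)

    # 2. Keep only explanations that demonstrably correct their error
    #    (crude substring check used here purely for illustration).
    verified = []
    for i in representatives:
        question, _, correct = errors[i]
        retry = llm_fn(f"{explanations[i]}\n\nQuestion: {question}\nAnswer:")
        if correct in retry:
            verified.append(explanations[i])

    # 3. Summarize the surviving explanations into a reusable prompt prefix,
    #    to be prepended to new queries at inference time (no weight updates).
    return llm_fn(
        "Summarize the following guidance into concise instructions that "
        "help avoid similar mistakes:\n" + "\n".join(verified)
    )
```

The design choice to verify before summarizing mirrors the abstract: only explanations that actually flip an error to a correct answer contribute to the inference-time prefix.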
Similar Papers
T-FIX: Text-Based Explanations with Features Interpretable to eXperts
Computation and Language
Makes AI give smart answers experts trust.
REFLEX: Self-Refining Explainable Fact-Checking via Disentangling Truth into Style and Substance
Computation and Language
Helps computers check if news is true.