Neutralizing Bias in LLM Reasoning using Entailment Graphs
By: Liang Cheng, Tianyi Li, Zhaowei Wang, and more
Potential Business Impact:
Makes AI reason about what it reads instead of just recalling memorized facts.
LLMs are often claimed to be capable of Natural Language Inference (NLI), which is widely regarded as a cornerstone of more complex forms of reasoning. However, recent work shows that LLMs still hallucinate in NLI because of attestation bias, whereby they over-rely on propositional memory to build reasoning shortcuts. To address this, we design an unsupervised framework that constructs counterfactual reasoning data and fine-tunes LLMs to reduce attestation bias. To measure how much bias is reduced, we build bias-adversarial variants of NLI datasets in which predicates in premises are randomly replaced while hypotheses are kept unchanged. Extensive evaluations show that our framework significantly reduces hallucinations caused by attestation bias. We then evaluate LLMs fine-tuned with our framework on the original NLI datasets and on their bias-neutralized versions, in which the original entities are replaced with randomly sampled ones. Results show that our framework consistently improves inferential performance on both the original and the bias-neutralized NLI datasets.
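To make the two evaluation constructions in the abstract concrete, here is a minimal sketch of how a single NLI example might be turned into a bias-adversarial variant (premise predicate swapped, hypothesis unchanged) and a bias-neutralized variant (entities swapped for random ones). The example sentences, replacement pools, and helper names are illustrative assumptions, not the authors' actual pipeline, which builds on entailment graphs and is not shown here.

```python
# Illustrative sketch only (not the paper's code): building a bias-adversarial
# and a bias-neutralized variant of one NLI example. All sentences, pools,
# and helper names below are hypothetical placeholders.
import random

random.seed(0)

# A toy NLI example whose hypothesis is a well-attested fact.
example = {
    "premise": "Google acquired YouTube.",
    "hypothesis": "Google owns YouTube.",
    "label": "entailment",
}

# Hypothetical pools to sample replacements from.
RANDOM_PREDICATES = ["criticized", "visited", "sued"]
RANDOM_ENTITIES = ["Acme Corp", "the museum", "Dr. Lee"]


def bias_adversarial(ex):
    """Randomly replace the premise predicate; keep the hypothesis unchanged.

    The hypothesis stays attested in the model's memory, but the new premise
    no longer supports it, so a model relying on attestation shortcuts will
    still tend to predict entailment incorrectly.
    """
    new_predicate = random.choice(RANDOM_PREDICATES)
    return {
        "premise": f"Google {new_predicate} YouTube.",
        "hypothesis": ex["hypothesis"],
        "label": "not_entailment",
    }


def bias_neutralized(ex):
    """Replace the original entities with randomly sampled ones.

    The predicate-level entailment relation is preserved, but the statements
    are no longer attested facts, removing the propositional-memory shortcut.
    """
    a, b = random.sample(RANDOM_ENTITIES, 2)

    def swap(text):
        return text.replace("Google", a).replace("YouTube", b)

    return {
        "premise": swap(ex["premise"]),
        "hypothesis": swap(ex["hypothesis"]),
        "label": ex["label"],
    }


print(bias_adversarial(example))
print(bias_neutralized(example))
```

The design intent, as described in the abstract, is that the adversarial set exposes how often a model answers from memory rather than from the premise, while the neutralized set checks that genuine inference ability is preserved once memorized facts can no longer help.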
Similar Papers
CLATTER: Comprehensive Entailment Reasoning for Hallucination Detection
Computation and Language
Helps AI tell if it's making things up.
Addressing Logical Fallacies In Scientific Reasoning From Large Language Models: Towards a Dual-Inference Training Framework
Artificial Intelligence
Teaches AI to spot wrong ideas, not just right ones.
Is LLMs Hallucination Usable? LLM-based Negative Reasoning for Fake News Detection
Computation and Language
Helps computers spot fake news by finding wrong ideas.