Filling the Gap: Is Commonsense Knowledge Generation useful for Natural Language Inference?
By: Chathuri Jayaweera, Brianna Yanqui, Bonnie Dorr
Potential Business Impact:
Helps computers determine whether one sentence logically follows from another.
Natural Language Inference (NLI) is the task of determining whether a premise semantically entails a given hypothesis. The task aims to develop systems that emulate natural human inferential processes, in which commonsense knowledge plays a major role. However, existing commonsense resources lack sufficient coverage for a variety of premise-hypothesis pairs. This study explores the potential of Large Language Models (LLMs) as commonsense knowledge generators for NLI along two key dimensions: their reliability in generating such knowledge and the impact of that knowledge on prediction accuracy. We adapt and modify existing metrics to assess LLM factuality and consistency in generating commonsense knowledge in this context. While explicitly incorporating commonsense knowledge does not consistently improve overall results, it effectively helps distinguish entailment instances and moderately improves the identification of contradiction and neutral inferences.
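To make the setup concrete, here is a minimal sketch of how LLM-generated commonsense knowledge might be injected into an NLI prediction step. It assumes an off-the-shelf Hugging Face NLI classifier and stubs out the LLM call; the function names, the canned knowledge string, and the classifier choice are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch (not the paper's implementation): augment an NLI input with
# LLM-generated commonsense knowledge before classification.

from transformers import pipeline

# Off-the-shelf NLI classifier with ENTAILMENT / NEUTRAL / CONTRADICTION labels.
nli = pipeline("text-classification", model="roberta-large-mnli")


def generate_commonsense(premise: str, hypothesis: str) -> str:
    """Stand-in for an LLM call that would return commonsense statements
    linking the premise and hypothesis (e.g., via an instruction prompt)."""
    # A canned statement keeps the sketch self-contained and runnable.
    return "Jogging is a form of physical exercise."


def predict_with_knowledge(premise: str, hypothesis: str):
    # Prepend the generated knowledge to the premise so the classifier can
    # condition on it together with the original pair.
    knowledge = generate_commonsense(premise, hypothesis)
    augmented_premise = f"{knowledge} {premise}"
    return nli({"text": augmented_premise, "text_pair": hypothesis})


print(predict_with_knowledge(
    "A man is jogging in the park.",
    "A man is exercising outdoors.",
))
```

The output contains the predicted label and its score; whether the added knowledge helps depends on the pair, which mirrors the paper's mixed overall findings.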
Similar Papers
Exploring the Influence of Relevant Knowledge for Natural Language Generation Interpretability
Computation and Language
Makes computers write sentences that make sense.
NLKI: A lightweight Natural Language Knowledge Integration Framework for Improving Small VLMs in Commonsense VQA Tasks
Computation and Language
Helps computers understand pictures by adding common sense.