Generics and Default Reasoning in Large Language Models
By: James Ravi Kirkpatrick, Rachel Katharine Sterken
Potential Business Impact:
Tests whether computers can reason with exception-permitting rules like "birds fly."
This paper evaluates the capabilities of 28 large language models (LLMs) to reason with 20 defeasible reasoning patterns involving generic generalizations (e.g., 'Birds fly', 'Ravens are black') central to non-monotonic logic. Generics are of special interest to linguists, philosophers, logicians, and cognitive scientists because of their complex exception-permitting behaviour and their centrality to default reasoning, cognition, and concept acquisition. We find that while several frontier models handle many default reasoning problems well, performance varies widely across models and prompting styles. Few-shot prompting modestly improves performance for some models, but chain-of-thought (CoT) prompting often leads to serious performance degradation (mean accuracy drop -11.14%, SD 15.74% in models performing above 75% accuracy in zero-shot condition, temperature 0). Most models either struggle to distinguish between defeasible and deductive inference or misinterpret generics as universal statements. These findings underscore both the promise and limits of current LLMs for default reasoning.
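The evaluation protocol described in the abstract, scoring models on defeasible-inference items under different prompting styles and summarising per-model accuracy drops by mean and standard deviation, can be sketched in miniature. Everything below is illustrative: the stub model, the toy items, and the delta values are stand-ins, not the paper's actual models, benchmark, or data.

```python
import statistics

# Hypothetical stub standing in for an LLM call; the paper's 28 models and
# prompting setups are not reproduced here.
def stub_model_answer(item, prompting="zero-shot"):
    # A toy "model" that treats generics as defeasible defaults:
    # conclude the default unless a defeater (exception) is present.
    if item["exception"]:
        return "no"
    return "yes"

# Toy defeasible-inference items in the spirit of the paper's patterns:
# a generic premise ("Birds fly."), an instance, and an optional defeater.
ITEMS = [
    {"premises": ["Birds fly.", "Tweety is a bird."],
     "question": "Does Tweety fly?", "exception": False, "gold": "yes"},
    {"premises": ["Birds fly.", "Tweety is a penguin."],
     "question": "Does Tweety fly?", "exception": True, "gold": "no"},
    {"premises": ["Ravens are black.", "Corvy is a raven."],
     "question": "Is Corvy black?", "exception": False, "gold": "yes"},
]

def accuracy(prompting):
    # Percentage of items where the model's answer matches the gold label.
    correct = sum(
        stub_model_answer(item, prompting) == item["gold"] for item in ITEMS
    )
    return 100.0 * correct / len(ITEMS)

def summarize_drops(drops):
    # Mean and sample SD of per-model accuracy changes between two
    # prompting conditions, mirroring the paper's headline statistic.
    return statistics.mean(drops), statistics.stdev(drops)

if __name__ == "__main__":
    print(f"zero-shot accuracy: {accuracy('zero-shot'):.1f}%")
    # Illustrative per-model CoT-minus-zero-shot deltas (invented numbers,
    # not the paper's data) to show how the mean/SD summary is computed.
    mean_drop, sd = summarize_drops([-5.0, -20.0, -8.4])
    print(f"mean drop {mean_drop:.2f}%, SD {sd:.2f}%")
```

The stub deliberately answers "no" only when a defeater is present, which is the behaviour the paper tests for: distinguishing defeasible inference (a penguin defeats "Birds fly") from reading the generic as a universal.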
Similar Papers
Reasoning Capabilities and Invariability of Large Language Models
Computation and Language
Tests if computers can think logically.
Navigating Semantic Relations: Challenges for Language Models in Abstract Common-Sense Reasoning
Computation and Language
Helps computers understand tricky ideas better.
A Survey on Large Language Models for Mathematical Reasoning
Artificial Intelligence
Helps computers solve math problems like a person.