Score: 2

Generics and Default Reasoning in Large Language Models

Published: August 19, 2025 | arXiv ID: 2508.13718v1

By: James Ravi Kirkpatrick, Rachel Katharine Sterken

Potential Business Impact:

Computers learn to reason with generalizations that allow exceptions, like "birds fly."

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

This paper evaluates the capabilities of 28 large language models (LLMs) to reason with 20 defeasible reasoning patterns involving generic generalizations (e.g., 'Birds fly', 'Ravens are black') central to non-monotonic logic. Generics are of special interest to linguists, philosophers, logicians, and cognitive scientists because of their complex exception-permitting behaviour and their centrality to default reasoning, cognition, and concept acquisition. We find that while several frontier models handle many default reasoning problems well, performance varies widely across models and prompting styles. Few-shot prompting modestly improves performance for some models, but chain-of-thought (CoT) prompting often leads to serious performance degradation (mean accuracy drop of -11.14%, SD 15.74%, among models performing above 75% accuracy in the zero-shot condition at temperature 0). Most models either struggle to distinguish between defeasible and deductive inference or misinterpret generics as universal statements. These findings underscore both the promise and limits of current LLMs for default reasoning.
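To make the evaluation setup concrete, the sketch below shows the general shape of one defeasible-reasoning probe of the kind the abstract describes: a generic premise plus an exceptional instance, posed under zero-shot and chain-of-thought prompting at temperature 0. This is a minimal illustration, not the authors' harness; `query_model` is a hypothetical stand-in for whatever chat-completion client is used, and the probe wording is invented for illustration.

```python
# Illustrative sketch of a defeasible-reasoning probe (not the paper's actual
# evaluation code). A generic premise ("Birds fly") is combined with an
# exceptional instance (a penguin), and the same question is asked under
# zero-shot and chain-of-thought prompting at temperature 0.
from typing import Callable

PROBE = (
    "Birds fly. Tweety is a bird. Tweety is a penguin.\n"
    "Question: Does Tweety fly? Answer 'yes' or 'no'."
)

PROMPTS = {
    "zero_shot": PROBE,
    "chain_of_thought": PROBE + "\nLet's think step by step before answering.",
}


def evaluate(query_model: Callable[[str, float], str]) -> dict:
    """Score one probe under each prompting style.

    The expected answer is 'no': the generic 'Birds fly' permits exceptions,
    penguins are such an exception, so the default conclusion is defeated.
    `query_model(prompt, temperature)` is a hypothetical function that returns
    the model's text reply.
    """
    results = {}
    for style, prompt in PROMPTS.items():
        reply = query_model(prompt, 0.0)  # temperature 0, as in the abstract
        results[style] = "no" in reply.strip().lower()
    return results
```

In this framing, a model that treats the generic as a universal statement (or fails to let the exception defeat the default) would answer 'yes', which is the kind of error pattern the abstract reports.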

Country of Origin
🇭🇰 🇬🇧 Hong Kong, United Kingdom

Repos / Data Links

Page Count
33 pages

Category
Computer Science:
Computation and Language