Understanding Syllogistic Reasoning in LLMs from Formal and Natural Language Perspectives
By: Aheli Poddar, Saptarshi Sahoo, Sujata Ghosh
Potential Business Impact:
Clarifies whether AI systems reason like people or like formal logic engines.
We study syllogistic reasoning in LLMs from both logical and natural language perspectives. In the process, we explore the fundamental reasoning capabilities of LLMs and the directions in which this research is moving. To aid our study, we use 14 large language models and investigate their syllogistic reasoning capabilities in terms of both symbolic inference and natural language understanding. Even though this reasoning mechanism is not a uniform emergent property across LLMs, the perfect symbolic performance of certain models makes us wonder whether LLMs are becoming formal reasoning mechanisms rather than capturing the nuances of human reasoning.
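To make concrete what symbolic syllogistic inference involves, here is a minimal sketch (not from the paper) that checks the classical syllogism Barbara ("All M are P; all S are M; therefore all S are P") against a toy set-theoretic model. The predicate names and domain elements are purely illustrative assumptions.

```python
def all_are(xs, ys):
    """Categorical statement 'All X are Y' under set semantics: X ⊆ Y."""
    return xs <= ys

# Hypothetical toy model: Greeks ⊆ humans ⊆ mortals.
mortals = {"socrates", "plato", "fido"}
humans = {"socrates", "plato"}
greeks = {"socrates", "plato"}

premise_1 = all_are(humans, mortals)   # All humans are mortal.
premise_2 = all_are(greeks, humans)    # All Greeks are humans.
conclusion = all_are(greeks, mortals)  # All Greeks are mortal.

# Barbara is valid: in any model where both premises hold,
# the conclusion must hold as well.
assert not (premise_1 and premise_2) or conclusion
print(premise_1, premise_2, conclusion)  # → True True True
```

Testing an LLM on the same pattern amounts to asking whether it endorses the conclusion given the premises, either stated symbolically or phrased in natural language.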
Similar Papers
Investigating Language Model Capabilities to Represent and Process Formal Knowledge: A Preliminary Study to Assist Ontology Engineering
Artificial Intelligence
Helps small computers reason better with logic.
Hybrid Models for Natural Language Reasoning: The Case of Syllogistic Logic
Computation and Language
Helps computers reason logically by combining two skills.
On the Notion that Language Models Reason
Computation and Language
Computers learn by copying patterns, not thinking.