Question Answering with LLMs and Learning from Answer Sets
By: Manuel Borroto, Katie Gallagher, Antonio Ielo, and more
Potential Business Impact:
Helps computers answer questions by learning rules.
Large Language Models (LLMs) excel at understanding natural language but struggle with explicit commonsense reasoning. A recent line of research suggests that combining LLMs with robust symbolic reasoning systems can overcome this problem on story-based question answering tasks. In this setting, existing approaches typically depend on human expertise to manually craft the symbolic component. We argue, however, that this component can also be learned automatically from examples. In this work, we introduce LLM2LAS, a hybrid system that effectively combines the natural language understanding capabilities of LLMs, the rule induction power of the Learning from Answer Sets (LAS) system ILASP, and the formal reasoning strengths of Answer Set Programming (ASP). LLMs are used to extract semantic structures from text, which ILASP then transforms into interpretable logic rules. These rules allow an ASP solver to perform precise and consistent reasoning, enabling correct answers to previously unseen questions. Empirical results highlight the strengths and weaknesses of our automatic approach to learning and reasoning on a story-based question answering benchmark.
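To make the pipeline concrete, here is a minimal sketch in plain Python (not ASP or ILASP syntax) of the kind of reasoning the learned rules enable: facts extracted from a story are combined with an induced rule to answer a question. All predicate and entity names are illustrative, not taken from the paper.

```python
# Hypothetical facts an LLM might extract from a short story,
# as (predicate, subject, object) triples.
story_facts = [
    ("moved", "mary", "kitchen"),
    ("moved", "john", "garden"),
    ("moved", "mary", "hallway"),
]

def where_is(person, facts):
    """Analogue of a learned rule: a person is at the destination
    of their most recent 'moved' event in the story."""
    location = None
    for pred, who, place in facts:
        if pred == "moved" and who == person:
            location = place  # later events override earlier ones
    return location

print(where_is("mary", story_facts))  # -> hallway
```

In the actual system, a rule of this shape would be induced by ILASP as a logic program and evaluated by an ASP solver, which guarantees consistent answers across all questions rather than relying on ad-hoc procedural code like the loop above.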
Similar Papers
GOFAI meets Generative AI: Development of Expert Systems by means of Large Language Models
Artificial Intelligence
Makes AI more truthful and trustworthy.
LLM-Driven Personalized Answer Generation and Evaluation
Computers and Society
Helps online students get answers just for them.
Method-Based Reasoning for Large Language Models: Extraction, Reuse, and Continuous Improvement
Computational Engineering, Finance, and Science
Teaches computers to solve new problems logically.