Explainable Rule Application via Structured Prompting: A Neural-Symbolic Approach
By: Albert Sadowski, Jarosław A. Chudziak
Potential Business Impact:
Helps AI apply rules consistently and explain each step of its reasoning, the way a careful lawyer would.
Large Language Models (LLMs) excel at complex reasoning tasks but struggle with consistent rule application, exception handling, and explainability, particularly in domains such as legal analysis that require both natural language understanding and precise logical inference. This paper introduces a structured prompting framework that decomposes reasoning into three verifiable steps: entity identification, property extraction, and symbolic rule application. By integrating neural and symbolic approaches, our method leverages LLMs' interpretive flexibility while ensuring logical consistency through formal verification. The framework externalizes task definitions, enabling domain experts to refine logical structures without altering the architecture. Evaluated on the LegalBench hearsay determination task, our approach significantly outperformed baselines, with OpenAI o-family models showing substantial improvements: using structured decomposition with complementary predicates, o1 achieved an F1 score of 0.929 and o3-mini reached 0.867, compared to their few-shot baselines of 0.714 and 0.740, respectively. This hybrid neural-symbolic system offers a promising pathway toward transparent and consistent rule-based reasoning, suggesting potential for explainable AI in structured legal reasoning tasks.
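To make the pipeline concrete, here is a minimal Python sketch of the three-step decomposition the abstract describes, applied to a hearsay-style fact pattern. The predicate names, the FRE 801(c)-style rule encoding, the prompt stub, and the hard-coded extraction result are illustrative assumptions for this sketch, not the authors' actual prompts or schema.

# A minimal sketch of the three-step decomposition, assuming a hearsay task.
# All names below (HearsayProperties, query_llm, etc.) are hypothetical.

from dataclasses import dataclass


def query_llm(prompt: str) -> str:
    """Stand-in for a call to any chat-completion API (assumption)."""
    raise NotImplementedError("wire up an LLM client here")


# Steps 1-2 (neural): the LLM identifies entities in the fact pattern
# and extracts boolean properties about them.
@dataclass
class HearsayProperties:
    is_statement: bool        # Was an assertion made?
    made_out_of_court: bool   # Was it made outside the current proceeding?
    offered_for_truth: bool   # Is it offered to prove the matter asserted?


def extract_properties(fact_pattern: str) -> HearsayProperties:
    # In the full framework each property would come from a structured
    # prompt; here the result is hard-coded so the sketch runs offline.
    return HearsayProperties(is_statement=True,
                             made_out_of_court=True,
                             offered_for_truth=True)


def consistent(p_answer: bool, not_p_answer: bool) -> bool:
    """Complementary-predicate check: answers to "P?" and "not P?"
    must disagree, or the extraction is rejected as inconsistent."""
    return p_answer != not_p_answer


# Step 3 (symbolic): a deterministic rule, not the LLM, makes the final
# call: an out-of-court statement offered to prove the truth of the
# matter asserted is hearsay.
def is_hearsay(p: HearsayProperties) -> bool:
    return p.is_statement and p.made_out_of_court and p.offered_for_truth


if __name__ == "__main__":
    facts = "A witness testifies that her neighbor told her the car ran the light."
    props = extract_properties(facts)
    # Simulated complementary answers for the "statement" predicate.
    p_ans, not_p_ans = True, False
    assert consistent(p_ans, not_p_ans)
    print("hearsay" if is_hearsay(props) else "not hearsay")

The point of the design is visible in the last step: because the final determination is a deterministic function of extracted predicates, every verdict can be traced back to specific, verifiable property judgments rather than an opaque end-to-end answer.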
Similar Papers
Towards Robust Legal Reasoning: Harnessing Logical LLMs in Law
Computers and Society
AI understands legal papers to answer questions.
Understanding LLM Scientific Reasoning through Promptings and Model's Explanation on the Answers
Artificial Intelligence
Makes AI better at solving hard science problems.
A Comparative Study of Neurosymbolic AI Approaches to Interpretable Logical Reasoning
Artificial Intelligence
Makes AI think logically like humans.