Non-Iterative Symbolic-Aided Chain-of-Thought for Logical Reasoning
By: Phuong Minh Nguyen, Tien Huu Dang, Naoya Inoue
Potential Business Impact:
Helps AI language models work through logic problems more accurately and transparently.
This work introduces Symbolic-Aided Chain-of-Thought (CoT), an enhancement of standard CoT prompting for logical reasoning in large language models (LLMs). The key idea is to integrate lightweight symbolic representations into few-shot prompts, structuring the inference steps with a consistent strategy so that reasoning patterns become explicit within a single, non-iterative reasoning pass. By incorporating these symbolic structures, our method preserves the generalizability of standard prompting techniques while enhancing the transparency, interpretability, and analyzability of LLM logical reasoning. Extensive experiments on four well-known logical reasoning benchmarks -- ProofWriter, FOLIO, ProntoQA, and LogicalDeduction, which cover diverse reasoning scenarios -- demonstrate the effectiveness of the proposed approach, particularly in complex reasoning tasks that require navigating multiple constraints or rules. Notably, Symbolic-Aided CoT consistently improves LLMs' reasoning capabilities across various model sizes and significantly outperforms conventional CoT on three of the four datasets: ProofWriter, ProntoQA, and LogicalDeduction.
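To make the prompting idea concrete, below is a minimal sketch of what a symbolic-aided few-shot prompt could look like: facts and rules in the exemplar are annotated with lightweight predicate-style notation, the reasoning steps follow a fixed pattern, and a new query is appended for a single (non-iterative) completion call. The exemplar content, the notation, and the helper name build_prompt are illustrative assumptions, not the paper's actual prompt format.

# Illustrative sketch only (not the paper's exact prompt): a few-shot exemplar
# whose facts, rules, and reasoning steps carry lightweight symbolic annotations,
# plus a helper that appends a new query for one non-iterative completion call.

SYMBOLIC_EXEMPLAR = """\
Context:
  F1: Cat(Tom)              # "Tom is a cat."
  R1: Cat(x) -> Animal(x)   # "Every cat is an animal."
  R2: Animal(x) -> Mortal(x)
Question: Is Tom mortal?
Reasoning:
  Step 1: F1 + R1    => Animal(Tom)
  Step 2: Step 1 + R2 => Mortal(Tom)
Answer: True
"""

def build_prompt(context: str, question: str) -> str:
    """Compose a single-pass prompt: symbolic exemplar followed by the new query."""
    return (
        SYMBOLIC_EXEMPLAR
        + "\nContext:\n"
        + context
        + "\nQuestion: "
        + question
        + "\nReasoning:\n"
    )

if __name__ == "__main__":
    prompt = build_prompt(
        context="  F1: Dog(Rex)\n  R1: Dog(x) -> Animal(x)\n  R2: Animal(x) -> Mortal(x)",
        question="Is Rex mortal?",
    )
    print(prompt)  # This string would be sent to an LLM in a single completion call.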
Similar Papers
VeriCoT: Neuro-symbolic Chain-of-Thought Validation via Logical Consistency Checks
Artificial Intelligence
Checks AI's thinking to make sure it's right.
Latent Chain-of-Thought for Visual Reasoning
Artificial Intelligence
Makes AI think step-by-step better for new problems.
Sketch-of-Thought: Efficient LLM Reasoning with Adaptive Cognitive-Inspired Sketching
Computation and Language
Makes smart computers think faster, using fewer words.