A Modular Dataset to Demonstrate LLM Abstraction Capability
By: Adam Atanas, Kai Liu
Potential Business Impact:
Helps AI understand its own thinking better.
Large language models (LLMs) exhibit impressive capabilities but still make reasoning errors due to hallucinations and flawed logic. To investigate how they internally represent reasoning, we introduce ArrangementPuzzle, a novel puzzle dataset with structured solutions and automated stepwise correctness verification. We trained a classifier on the LLM's internal activations over this dataset and found that it predicts the correctness of reasoning steps with over 80% accuracy, indicating that LLMs internally distinguish correct from incorrect reasoning, with the strongest representations in middle-to-late transformer layers. Further analysis reveals that LLMs encode abstract reasoning concepts in the middle activation layers of the transformer architecture, distinguishing logical from semantic equivalence. These findings offer insight into LLM reasoning mechanisms, contribute to improving AI reliability and interpretability, and open the possibility of manipulating and refining LLM reasoning.
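To make the probing setup concrete, here is a minimal sketch of training a linear classifier on per-step hidden activations. It assumes a generic HuggingFace causal LM ("gpt2" as a placeholder), a hypothetical layer index, and toy labeled reasoning steps; the paper's actual model, dataset pipeline, and classifier details are not specified in the abstract, so treat this as an illustration of the general technique rather than the authors' implementation.

import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"   # placeholder model, not necessarily the one used in the paper
LAYER = 8             # hypothetical "middle-to-late" layer to probe

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def step_activation(text: str) -> np.ndarray:
    """Mean-pool the chosen layer's hidden states for one reasoning step."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).hidden_states[LAYER]  # shape: (1, seq_len, d_model)
    return hidden.mean(dim=1).squeeze(0).numpy()

# Toy stand-ins for verified puzzle steps: 1 = correct step, 0 = incorrect step.
steps = [
    "A is left of B, so the order so far is A, B.",
    "B is left of A, so the order so far is A, B.",
]
labels = [1, 0]

X = np.stack([step_activation(s) for s in steps])

# Fit a simple linear probe; a real experiment would use a held-out split
# and many verified steps rather than scoring on the training data.
probe = LogisticRegression(max_iter=1000).fit(X, labels)
print("probe accuracy (toy data):", probe.score(X, labels))

Repeating this probe across layers and comparing accuracies is one way to see where correctness information is most linearly decodable, which is the kind of layer-wise comparison the abstract describes.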
Similar Papers
Reasoning Models Reason Well, Until They Don't
Artificial Intelligence
Makes smart computers better at solving hard problems.
Reasoning Capabilities and Invariability of Large Language Models
Computation and Language
Tests if computers can think logically.
Can Large Language Models Learn Formal Logic? A Data-Driven Training and Evaluation Framework
Machine Learning (CS)
Teaches computers to prove math problems correctly.