On the Limits of Hierarchically Embedded Logic in Classical Neural Networks
By: Bill Cochran
Potential Business Impact:
Today's AI can only reason so many steps deep, so it fails at tasks like counting over complex statements.
We propose a formal model of reasoning limitations in large neural language models, grounded in the depth of their architecture. By treating neural networks as linear operators over a logical predicate space, we show that each layer can encode at most one additional level of logical reasoning. We prove that a network of a given depth cannot faithfully represent predicates one order of logic higher, such as simple counting over complex predicates, implying a strict upper bound on logical expressiveness. This structure induces a nontrivial null space during tokenization and embedding, excluding higher-order predicates from representability. Our framework offers a natural explanation for phenomena such as hallucination, repetition, and limited planning, while also providing a foundation for understanding how approximations to higher-order logic may emerge. These results motivate architectural extensions and interpretability strategies for future language model development.
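To make the null-space claim concrete, here is a minimal numerical sketch. It is not the paper's construction: the dimensions d_pred and d_model, the embedding operator E, and the later layers W1, W2 are illustrative assumptions. It shows only the linear-algebra core of the argument: when the embedding has fewer output dimensions than the predicate space, any predicate component lying in its null space is erased at embedding time, and no stack of later layers, however deep, can recover it.

    # Minimal sketch (illustrative assumptions, not the paper's construction):
    # a rank-deficient embedding induces a null space that erases certain
    # predicate directions, so no later layer can recover them.
    import numpy as np

    rng = np.random.default_rng(0)

    d_pred, d_model = 8, 6                   # predicate-space dim > embedding dim
    E = rng.normal(size=(d_model, d_pred))   # hypothetical embedding operator

    # Because d_model < d_pred, E has a nontrivial null space.
    _, _, Vt = np.linalg.svd(E)
    null_dir = Vt[-1]                        # a direction E maps (numerically) to zero

    x = rng.normal(size=d_pred)              # a representable predicate encoding
    x_plus = x + 3.0 * null_dir              # same encoding plus a "higher-order" component

    # After embedding, the two encodings are indistinguishable ...
    print(np.allclose(E @ x, E @ x_plus, atol=1e-8))                # True

    # ... so any composition of later layers (here: random linear maps)
    # produces identical outputs, regardless of depth.
    W1 = rng.normal(size=(d_model, d_model))
    W2 = rng.normal(size=(d_model, d_model))
    print(np.allclose(W2 @ W1 @ (E @ x), W2 @ W1 @ (E @ x_plus)))   # True

Both checks print True: the two predicate encodings collapse to the same representation the moment they pass through E, which is the sense in which higher-order components are excluded from representability in this toy setting.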
Similar Papers
Standard Neural Computation Alone Is Insufficient for Logical Intelligence
Artificial Intelligence
Plain neural nets alone aren't enough for logical thinking.
Quantifying The Limits of AI Reasoning: Systematic Neural Network Representations of Algorithms
Machine Learning (CS)
AI can do any thinking computers can do.