The Missing Layer of AGI: From Pattern Alchemy to Coordination Physics
By: Edward Y. Chang
Influential critiques argue that Large Language Models (LLMs) are a dead end for AGI: "mere pattern matchers" structurally incapable of reasoning or planning. We argue this conclusion misidentifies the bottleneck: it confuses the ocean with the net. Pattern repositories are the necessary System-1 substrate; the missing component is a System-2 coordination layer that selects, constrains, and binds these patterns. We formalize this layer via UCCT, a theory of semantic anchoring that models reasoning as a phase transition governed by effective support ($\rho_d$), representational mismatch ($d_r$), and an adaptive anchoring budget ($\gamma \log k$). Under this lens, ungrounded generation is simply unbaited retrieval of the substrate's maximum-likelihood prior, while "reasoning" emerges when anchors shift the posterior toward goal-directed constraints. We translate UCCT into architecture with MACI, a coordination stack that implements baiting (behavior-modulated debate), filtering (Socratic judging), and persistence (transactional memory). By reframing common objections as testable coordination failures, we argue that the path to AGI runs through LLMs, not around them.
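To make the phase-transition claim concrete, here is a minimal numerical sketch of the anchoring threshold, assuming an additive margin $\rho_d - d_r + \gamma \log k$ and a logistic squash; this combination rule, the function names (anchoring_margin, posterior_anchor_weight), and all parameter values are illustrative assumptions for exposition, not the formalism of the UCCT paper itself.

import math

def anchoring_margin(rho_d: float, d_r: float, gamma: float, k: int) -> float:
    # ASSUMED form: effective support minus representational mismatch,
    # plus the adaptive anchoring budget gamma * log(k). The additive
    # combination is an illustrative guess, not the paper's equation.
    return rho_d - d_r + gamma * math.log(k)

def posterior_anchor_weight(margin: float, temperature: float = 1.0) -> float:
    # Soft phase transition: a logistic squash of the margin stands in for
    # the shift from prior-driven (System-1) generation to anchored
    # (System-2) behavior as the margin crosses zero.
    return 1.0 / (1.0 + math.exp(-margin / temperature))

# Weak support, high mismatch: output stays near the substrate's
# maximum-likelihood prior (weight well below 0.5).
print(posterior_anchor_weight(anchoring_margin(rho_d=0.2, d_r=0.9, gamma=0.1, k=4)))

# Strong support, low mismatch: anchors dominate and the posterior
# tracks the goal-directed constraints (weight well above 0.5).
print(posterior_anchor_weight(anchoring_margin(rho_d=0.9, d_r=0.2, gamma=0.1, k=4)))

Read this only as a toy model of the qualitative behavior the abstract describes: below the threshold, anchors fail to bait retrieval away from the prior; above it, the posterior flips to the anchored regime.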