Systems Explaining Systems: A Framework for Intelligence and Consciousness
By: Sean Niklas Semmler
This paper proposes a conceptual framework in which intelligence and consciousness emerge from relational structure rather than from prediction or domain-specific mechanisms. Intelligence is defined as the capacity to form and integrate causal connections between signals, actions, and internal states. Through context enrichment, a system interprets incoming information using learned relational structure, which supplies essential context that the raw input itself does not contain, enabling efficient processing under metabolic constraints. Building on this foundation, we introduce the systems-explaining-systems principle: consciousness emerges when recursive architectures allow higher-order systems to learn and interpret the relational patterns of lower-order systems across time. These interpretations are integrated into a dynamically stabilized meta-state and fed back through context enrichment, transforming internal models from representations of the external world into models of the system's own cognitive processes. The framework reframes predictive processing as an emergent consequence of contextual interpretation rather than explicit forecasting, and suggests that recursive multi-system architectures may be necessary for more human-like artificial intelligence.
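The recursive loop described in the abstract can be sketched minimally: a lower-order system interprets signals using contextual input, while a higher-order system summarizes the lower system's trajectory into a meta-state that is fed back as context. This is an illustrative assumption, not the paper's implementation; all names (`LowerSystem`, `HigherSystem`, `meta_state`, the update coefficients) are hypothetical.

```python
# Hypothetical sketch of the "systems-explaining-systems" loop.
# All class names, update rules, and coefficients are illustrative
# assumptions, not taken from the paper.

class LowerSystem:
    """Interprets raw signals using contextual input (context enrichment)."""
    def __init__(self):
        self.state = 0.0

    def interpret(self, signal, context):
        # The raw signal is combined with relational context that
        # the signal itself does not contain.
        self.state = 0.5 * self.state + 0.5 * (signal + context)
        return self.state


class HigherSystem:
    """Learns the relational patterns of the lower system across time."""
    def __init__(self):
        self.meta_state = 0.0

    def observe(self, lower_state):
        # A dynamically stabilized meta-state: a slow running summary of
        # the lower system's trajectory, not of the external world.
        self.meta_state = 0.9 * self.meta_state + 0.1 * lower_state
        return self.meta_state


lower, higher = LowerSystem(), HigherSystem()
context = 0.0
for signal in [1.0, 0.5, -0.2, 0.8]:
    s = lower.interpret(signal, context)
    # The higher system's interpretation is fed back as context,
    # closing the recursive loop.
    context = higher.observe(s)
```

The key structural point the sketch illustrates is the feedback edge: the higher system models the lower system (not the input stream), and its output re-enters the lower system's interpretation step.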