A Disproof of Large Language Model Consciousness: The Necessity of Continual Learning for Consciousness
By: Erik Hoel
The requirements for a falsifiable and non-trivial theory of consciousness significantly constrain such theories. Specifically, recent research on the Unfolding Argument and the Substitution Argument has given us formal tools for analyzing the requirements of a theory of consciousness. I show via a new Proximity Argument that these requirements especially constrain the potential consciousness of contemporary Large Language Models (LLMs) because of their proximity to systems that are equivalent to LLMs in terms of input/output function; yet, for these functionally equivalent systems, no non-trivial theory of consciousness can judge them conscious. This forms the basis of a disproof of contemporary LLM consciousness. I then show a positive result: theories of consciousness based on (or requiring) continual learning do satisfy the stringent formal constraints for a theory of consciousness in humans. Intriguingly, this work supports a hypothesis: if continual learning is linked to consciousness in humans, the current limitations of LLMs (which do not continually learn) are intimately tied to their lack of consciousness.
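As an illustrative sketch (not drawn from the paper itself), the toy Python below contrasts a frozen-weight model, which computes a fixed input/output function and can therefore be matched by a lookup table that merely replays its outputs, with a continually learning system whose responses depend on its history and so cannot be reproduced by any static table built in advance. The names frozen_model, LookupTable, and ContinualLearner are hypothetical stand-ins introduced here for illustration, not constructs from the paper.

```python
from typing import Dict


def frozen_model(prompt: str) -> str:
    """Stand-in for a fixed-weight LLM: identical input yields identical output."""
    return prompt.upper()  # toy deterministic mapping


class LookupTable:
    """Replays cached input/output pairs; never changes between calls."""

    def __init__(self) -> None:
        self.table: Dict[str, str] = {}

    def __call__(self, prompt: str) -> str:
        if prompt not in self.table:
            self.table[prompt] = frozen_model(prompt)  # populate once
        return self.table[prompt]


class ContinualLearner:
    """Toy system whose internal state updates on every input it sees."""

    def __init__(self) -> None:
        self.seen = 0  # stands in for weights that change with experience

    def __call__(self, prompt: str) -> str:
        self.seen += 1
        return f"{prompt.upper()} [updated {self.seen}x]"


if __name__ == "__main__":
    table = LookupTable()
    # The frozen model and the lookup table are indistinguishable at the
    # input/output level, since neither changes as a result of the interaction.
    for p in ["hello", "hello", "are you conscious?"]:
        assert frozen_model(p) == table(p)

    learner = ContinualLearner()
    # The continual learner returns different outputs to the same input over
    # time, so no fixed table built in advance can stand in for it.
    print(learner("hello"))  # HELLO [updated 1x]
    print(learner("hello"))  # HELLO [updated 2x]
```

Under these assumptions, the sketch only illustrates the kind of input/output equivalence the argument trades on: a system that never updates between interactions is, behaviorally, interchangeable with a replay of its own outputs, whereas a continually learning system is not.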