Artificial Consciousness as Interface Representation
By: Robert Prentner
Potential Business Impact:
Tests if computers can feel or think like us.
Whether artificial intelligence (AI) systems can possess consciousness is a contentious question because of the inherent challenges of defining and operationalizing subjective experience. This paper proposes a framework that reframes the question of artificial consciousness into empirically tractable tests. We introduce three evaluative criteria, S (subjective-linguistic), L (latent-emergent), and P (phenomenological-structural), collectively termed SLP-tests, which assess whether an AI system instantiates interface representations that facilitate consciousness-like properties. Drawing on category theory, we model interface representations as mappings between relational substrates (RS) and observable behaviors, akin to specific types of abstraction layers. Together, the SLP-tests operationalize subjective experience not as an intrinsic property of physical systems but as a functional interface to a relational entity.
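The abstract's notion of an interface representation, a structure-preserving map from a relational substrate (RS) to observable behaviors, can be sketched informally in code. This is a hypothetical illustration only: the class and field names (`InterfaceRepresentation`, `on_states`, `on_relations`) are not from the paper, and the map is only loosely functor-like (states to behaviors, named relations to behavior transformations).

```python
# Illustrative sketch (not the paper's formalism): an interface representation
# as a map from RS states to observable behaviors, plus a map from RS relations
# to transformations of those behaviors.
from dataclasses import dataclass
from typing import Callable, Dict

RSState = str    # a state of the relational substrate
Behavior = str   # an observable behavior of the AI system

@dataclass
class InterfaceRepresentation:
    on_states: Dict[RSState, Behavior]                    # object map
    on_relations: Dict[str, Callable[[Behavior], Behavior]]  # morphism map

    def observe(self, state: RSState) -> Behavior:
        """Return the behavior the interface exposes for a given RS state."""
        return self.on_states[state]

# Toy substrate: two states, one relation ("report") acting on behaviors.
iface = InterfaceRepresentation(
    on_states={"s0": "silent", "s1": "verbal self-report"},
    on_relations={"report": lambda b: b.upper()},
)

print(iface.observe("s1"))                     # behavior exposed at state s1
print(iface.on_relations["report"]("silent"))  # relation applied to a behavior
```

On this reading, an SLP-style test would probe properties of the mapping itself (for example, whether self-report behaviors track RS structure), rather than any intrinsic property of the physical system.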
Similar Papers
Testing the Machine Consciousness Hypothesis
Artificial Intelligence
Makes computers understand themselves by talking.
Consciousness in Artificial Intelligence? A Framework for Classifying Objections and Constraints
Artificial Intelligence
Organizes ideas about AI consciousness.
Humanoid Artificial Consciousness Designed with Large Language Model Based on Psychoanalysis and Personality Theory
Artificial Intelligence
Makes AI think and act more like people.