Semantic Attractors and the Emergence of Meaning: Towards a Teleological Model of AGI
By: Hans-Joachim Rudolph
Potential Business Impact:
Outlines how computers could form meaning rather than just predict the next word.
This essay develops a theoretical framework for a semantic Artificial General Intelligence (AGI) based on the notion of semantic attractors in complex-valued meaning spaces. Departing from current transformer-based language models, which operate on statistical next-token prediction, we explore a model in which meaning is not inferred probabilistically but formed through recursive tensorial transformation. Using cyclic operations involving the imaginary unit i, we describe a rotational semantic structure capable of modeling irony, homonymy, and ambiguity. At the center of this model, however, is a semantic attractor -- a teleological operator that, unlike statistical computation, acts as an intentional agent (Microvitum), guiding meaning toward stability, clarity, and expressive depth. Conceived in terms of gradient flows, tensor deformations, and iterative matrix dynamics, the attractor offers a model of semantic transformation that is not only mathematically suggestive, but also philosophically significant. We argue that true meaning emerges not from simulation, but from recursive convergence toward semantic coherence, and that this requires a fundamentally new kind of cognitive architecture -- one designed to shape language, not just predict it.
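To make the abstract's two central constructions more concrete, here is a minimal, purely illustrative sketch in Python (not taken from the essay): it treats a "meaning state" as a complex vector, shows the four-phase cycle obtained by repeated multiplication with i, and models the semantic attractor as a fixed point of a contractive update. The state vector, the attractor target, the update rule, and the rate parameter are all assumptions introduced for illustration only.

```python
# Illustrative sketch only: a toy complex-valued "meaning state" rotated by i
# (the four-phase cycle i -> -1 -> -i -> 1) and then driven toward a fixed
# point by a contractive update standing in for the semantic attractor.
# All names and dynamics here are assumptions; the essay defines no algorithm.

import numpy as np

rng = np.random.default_rng(0)

# A toy semantic state: a complex vector (hypothetical stand-in for a point
# in the complex-valued meaning space).
state = rng.normal(size=4) + 1j * rng.normal(size=4)

# Cyclic rotation: multiplying by i four times returns the original state.
assert np.allclose(state * 1j**4, state)

# Hypothetical attractor: a fixed target state toward which iteration contracts.
attractor = rng.normal(size=4) + 1j * rng.normal(size=4)

def attractor_step(s, target, rate=0.3):
    """One contractive update pulling the state toward the attractor."""
    return s + rate * (target - s)

for _ in range(50):
    state = attractor_step(state, attractor)

# After enough iterations the state has converged to the attractor (fixed point).
print(np.max(np.abs(state - attractor)))  # ~0, i.e. converged
```

The contraction stands in for the gradient-flow dynamics the abstract alludes to: any fixed point of such an update is, by construction, a stable attractor of the iteration.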
Similar Papers
AGI-Driven Generative Semantic Communications: Principles and Practices
Artificial Intelligence
Lets computers talk like humans, saving energy.
The Missing Layer of AGI: From Pattern Alchemy to Coordination Physics
Artificial Intelligence
Makes AI think and plan like humans.
Dynamics of Agentic Loops in Large Language Models: A Geometric Theory of Trajectories
Machine Learning (CS)
Models how AI agents' reasoning loops behave and evolve over time.