Dual Computational Horizons: Incompleteness and Unpredictability in Intelligent Systems
By: Abhisek Ganguly
We formalize two independent computational limitations that constrain algorithmic intelligence: formal incompleteness and dynamical unpredictability. The former limits the deductive power of consistent reasoning systems, while the latter bounds long-term prediction under finite precision. We show that these two limits together impose structural bounds on an agent's ability to reason about its own predictive capabilities. In particular, an algorithmic agent cannot, in general, compute its own maximal prediction horizon. This perspective clarifies inherent trade-offs between reasoning, prediction, and self-analysis in intelligent systems.
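The second limitation, dynamical unpredictability under finite precision, can be illustrated with a minimal sketch (not taken from the paper): two trajectories of the chaotic logistic map that start from states agreeing to twelve decimal places diverge after a few dozen iterations, so any fixed-precision predictor of such a system has a finite prediction horizon. The function names and the divergence threshold here are illustrative choices, not the paper's notation.

```python
# Illustrative sketch of finite-precision unpredictability using the
# logistic map x -> r*x*(1-x), which is chaotic for r = 4.0.

def logistic(x: float, r: float = 4.0) -> float:
    return r * x * (1.0 - x)

def trajectory(x0: float, steps: int) -> list[float]:
    """Iterate the logistic map from x0 for the given number of steps."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

# Two initial conditions differing by 1e-12 (below typical measurement
# precision) produce trajectories whose gap grows roughly exponentially.
a = trajectory(0.3, 60)
b = trajectory(0.3 + 1e-12, 60)

# First step at which the trajectories disagree by more than 0.1:
first_divergent = next(
    i for i, (x, y) in enumerate(zip(a, b)) if abs(x - y) > 0.1
)
```

Because the error roughly doubles each step, an initial uncertainty of 1e-12 exhausts the interval [0, 1] within a few dozen iterations, which is the sense in which the prediction horizon is bounded.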