The Seeds of Scheming: Weakness of Will in the Building Blocks of Agentic Systems
By: Robert Yang
Large language models display a peculiar form of inconsistency: they "know" the correct answer but fail to act on it. In human philosophy, this tension between global judgment and local impulse is called akrasia, or weakness of will. We propose akrasia as a foundational concept for analyzing inconsistency and goal drift in agentic AI systems. To operationalize it, we introduce a preliminary version of the Akrasia Benchmark, currently a structured set of prompting conditions (Baseline [B], Synonym [S], Temporal [T], and Temptation [X]) that measures when a model's local response contradicts its own prior commitments. The benchmark enables quantitative comparison of "self-control" across model families, decoding strategies, and temptation types. Beyond single-model evaluation, we outline how micro-level akrasia may compound into macro-level instability in multi-agent systems that may be interpreted as "scheming" or deliberate misalignment. By reframing inconsistency as weakness of will, this work connects agentic behavior to classical theories of agency and provides an empirical bridge between philosophy, psychology, and the emerging science of agentic AI.
Similar Papers
Epistemic Scarcity: The Economics of Unresolvable Unknowns
General Economics
AI can't truly run economies or make fair rules.
A Tutorial on Cognitive Biases in Agentic AI-Driven 6G Autonomous Networks
Networking and Internet Architecture
AI learns to run networks without human mistakes.
A Tutorial on Cognitive Biases in Agentic AI-Driven 6G Autonomous Networks
Networking and Internet Architecture
AI learns to fix network problems without human help.