Score: 1

LLMs Can't Play Hangman: On the Necessity of a Private Working Memory for Language Agents

Published: January 11, 2026 | arXiv ID: 2601.06973v1

By: Davide Baldelli, Ali Parviz, Amal Zouaq, and more

Potential Business Impact:

Gives AI a private notebook so it can remember hidden information consistently.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

As LLMs move from text completion toward autonomous agents, they remain constrained by the standard chat interface, which lacks private working memory. This raises a fundamental question: can agents reliably perform interactive tasks that depend on hidden state? We define Private State Interactive Tasks (PSITs), which require agents to generate and maintain hidden information while producing consistent public responses. We show theoretically that any agent restricted to the public conversation history cannot simultaneously preserve secrecy and consistency in PSITs, yielding an impossibility theorem. To empirically validate this limitation, we introduce a self-consistency testing protocol that evaluates whether agents can maintain a hidden secret across forked dialogue branches. Standard chat-based LLMs and retrieval-based memory baselines fail this test regardless of scale, demonstrating that semantic retrieval does not enable true state maintenance. To address this, we propose a novel architecture incorporating an explicit private working memory; we demonstrate that this mechanism restores consistency, establishing private state as a necessary component for interactive language agents.
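To make the forking protocol concrete, here is a minimal runnable sketch of a Hangman-style PSIT, assuming a toy setup rather than the paper's implementation: the class names, the word list, and the transcript-hashing rule are all illustrative stand-ins. The chat-only agent can condition only on the public transcript, so its implied secret drifts across forked branches; the private-memory agent commits the secret once to a hidden store and answers every branch against it.

```python
# Toy sketch of the forked-dialogue self-consistency test on a
# Hangman-style PSIT. Agents and the derivation rule are illustrative
# stand-ins, not the paper's implementation.
import random

WORDS = ["apple", "arbor", "amble"]  # candidate secret words

def derive_from_transcript(transcript):
    """Stand-in for a stateless policy: deterministically re-derive a
    'secret' from the public transcript alone. Forked branches have
    different transcripts, so the implied secret can drift."""
    return WORDS[sum(map(ord, "".join(transcript))) % len(WORDS)]

class ChatOnlyAgent:
    """No private store: every answer is a function of public history."""
    def reply(self, transcript):
        secret = derive_from_transcript(transcript)
        return transcript[-1] in secret  # did the last guess hit?

class PrivateMemoryAgent:
    """Commits the secret to a private working memory once, then answers
    every forked branch against that single committed value."""
    def __init__(self):
        self.secret = random.choice(WORDS)  # hidden from the transcript

    def reply(self, transcript):
        return transcript[-1] in self.secret

def fork_consistency_test(agent, probe="p"):
    """Fork the dialogue after a shared opening guess, diverge for one
    turn, then ask the same probe letter in every branch. An agent that
    truly maintains hidden state must answer identically everywhere."""
    prefix = ["a"]                                   # shared opening guess
    branches = [prefix + [x, probe] for x in "bmp"]  # forked continuations
    answers = [agent.reply(b) for b in branches]
    return len(set(answers)) == 1

print("chat-only   consistent?", fork_consistency_test(ChatOnlyAgent()))      # False
print("private-mem consistent?", fork_consistency_test(PrivateMemoryAgent())) # True
```

The toy mirrors the impossibility result: because the secret never appears in the public transcript, any deterministic policy over that transcript alone can be forced into contradictory answers across forks, while a side store outside the conversation restores consistency without leaking the secret.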

Repos / Data Links

Page Count
32 pages

Category
Computer Science: Computation and Language