Score: 1

ABBEL: LLM Agents Acting through Belief Bottlenecks Expressed in Language

Published: December 23, 2025 | arXiv ID: 2512.20111v1

By: Aly Lidayan, Jakob Bjorner, Satvik Golechha, and more

BigTech Affiliations: University of California, Berkeley

Potential Business Impact:

Lets AI agents handle long multi-step tasks with far less memory by keeping a short running summary of what they have learned instead of the full interaction history.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

As the length of sequential decision-making tasks increases, it becomes computationally impractical to keep full interaction histories in context. We introduce a general framework for LLM agents to maintain concise contexts through multi-step interaction: Acting through Belief Bottlenecks Expressed in Language (ABBEL), along with methods to further improve ABBEL agents with RL post-training. ABBEL replaces the long multi-step interaction history with a belief state, i.e., a natural-language summary of what has been discovered about task-relevant unknowns. Under ABBEL, at each step the agent first updates a prior belief with the most recent observation from the environment to form a posterior belief, then uses only the posterior to select an action. We systematically evaluate frontier models under ABBEL across six diverse multi-step environments, finding that ABBEL supports generating interpretable beliefs while maintaining near-constant memory use over interaction steps. However, bottleneck approaches are prone to error propagation, and we observe that errors in belief updating cause worse performance than the full-context setting. Therefore, we train LLMs to generate and act on beliefs within the ABBEL framework via reinforcement learning (RL). We experiment with belief grading, which rewards higher-quality beliefs, and with belief length penalties, which reward more compressed beliefs. Our experiments demonstrate that RL can improve ABBEL's performance beyond the full-context setting while using less memory than contemporaneous approaches.
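To make the per-step loop concrete: as described in the abstract, the agent alternates one LLM call that folds the newest observation into the belief and one LLM call that picks an action from the posterior belief alone. The sketch below is a minimal illustration under assumed interfaces; `call_llm`, `env`, and the prompts are hypothetical stand-ins, not the authors' implementation.

```python
# Minimal sketch of an ABBEL-style episode. `call_llm(prompt) -> str` and the
# `env` object (reset/step) are assumed interfaces for illustration only.

def abbel_episode(env, call_llm, max_steps=50):
    """Run one episode where the agent acts only through a language belief state."""
    belief = "Nothing task-relevant discovered yet."
    observation = env.reset()
    for _ in range(max_steps):
        # Belief update: fold the most recent observation into the prior belief,
        # producing a posterior belief that replaces the full history.
        belief = call_llm(
            f"Prior belief:\n{belief}\n\n"
            f"New observation:\n{observation}\n\n"
            "Write an updated, concise belief summarizing everything "
            "task-relevant discovered so far."
        )
        # Action selection: condition only on the posterior belief, so context
        # size stays near-constant across interaction steps.
        action = call_llm(f"Current belief:\n{belief}\n\nChoose the next action.")
        observation, reward, done = env.step(action)
        if done:
            return reward
    return 0.0
```

Because only the belief string is carried forward, the prompt length at step t does not grow with t, which is the near-constant memory property the evaluation measures.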
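The RL post-training signal could combine the ideas the abstract names: a grader score rewarding higher-quality beliefs and a length penalty rewarding compression. The sketch below is an assumption, not the paper's exact formulation; `grade_belief`, the weights, and the word-count penalty are illustrative choices.

```python
# Hedged sketch of reward shaping for ABBEL RL post-training.
# grade_belief(belief) -> float in [0, 1] is an assumed grader (e.g. an LLM judge).

def shaped_reward(task_reward, belief, grade_belief,
                  quality_weight=0.1, length_weight=0.001):
    quality_bonus = quality_weight * grade_belief(belief)   # reward better beliefs
    length_penalty = length_weight * len(belief.split())    # reward shorter beliefs
    return task_reward + quality_bonus - length_penalty
```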

Country of Origin
🇺🇸 United States

Page Count
25 pages

Category
Computer Science:
Computation and Language