Score: 1

Exploring State Tracking Capabilities of Large Language Models

Published: November 13, 2025 | arXiv ID: 2511.10457v1

By: Kiamehr Rezaee, Jose Camacho-Collados, Mohammad Taher Pilehvar

Potential Business Impact:

Shows whether AI models can reliably keep track of many changing facts over long sequences of steps.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large Language Models (LLMs) have demonstrated impressive capabilities in solving complex tasks, including those requiring a certain level of reasoning. In this paper, we focus on state tracking, a problem where models need to keep track of the state governing a number of entities. To isolate the state tracking component from other factors, we propose a benchmark based on three well-defined state tracking tasks and analyse the performance of LLMs in different scenarios. The results indicate that the recent generation of LLMs (specifically, GPT-4 and Llama3) is capable of tracking state, especially when integrated with mechanisms such as Chain of Thought. However, models from the previous generation, while understanding the task and able to solve it in its initial stages, often fail once the number of steps grows beyond a certain point.
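To make the evaluation setup concrete, below is a minimal sketch of the kind of state-tracking probe the abstract describes: the model must report which item is in which container after a series of swap operations, and accuracy can be measured as the number of steps grows. The specific box/ball swap task, function names, and exact-match scoring here are illustrative assumptions, not the paper's actual benchmark.

```python
# Minimal sketch of a state-tracking probe (illustrative, not the paper's
# exact benchmark): boxes hold balls, a sequence of swaps is applied, and
# the model must report the final state.
import random


def make_swap_task(num_boxes: int = 5, num_steps: int = 10, seed: int = 0):
    """Build a prompt describing swap operations and return it with the ground truth."""
    rng = random.Random(seed)
    # Initial state: box i holds ball i.
    state = list(range(num_boxes))
    lines = [
        f"There are {num_boxes} boxes. Box i initially contains ball i (0-indexed)."
    ]
    for step in range(1, num_steps + 1):
        a, b = rng.sample(range(num_boxes), 2)
        state[a], state[b] = state[b], state[a]
        lines.append(f"Step {step}: swap the contents of box {a} and box {b}.")
    lines.append("Which ball is in each box now? Answer as a comma-separated list.")
    return "\n".join(lines), state


def score_answer(answer: str, ground_truth: list[int]) -> bool:
    """Exact-match check of the model's final-state answer."""
    try:
        predicted = [int(x) for x in answer.replace(" ", "").split(",")]
    except ValueError:
        return False
    return predicted == ground_truth


if __name__ == "__main__":
    prompt, truth = make_swap_task(num_boxes=5, num_steps=10, seed=42)
    print(prompt)
    print("Ground truth:", truth)
    # In an actual evaluation, `prompt` would be sent to an LLM and its reply
    # passed to score_answer; accuracy is then tracked as num_steps increases.
```

Varying `num_steps` is what surfaces the behaviour the abstract reports: a model may answer correctly after a few swaps yet degrade as the sequence lengthens, which is the failure mode attributed to earlier-generation models.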

Country of Origin
🇬🇧 United Kingdom

Repos / Data Links

Page Count
17 pages

Category
Computer Science:
Computation and Language