Exploring State Tracking Capabilities of Large Language Models
By: Kiamehr Rezaee, Jose Camacho-Collados, Mohammad Taher Pilehvar
Potential Business Impact:
Helps computers remember many things at once.
Large Language Models (LLMs) have demonstrated impressive capabilities in solving complex tasks, including those requiring a certain level of reasoning. In this paper, we focus on state tracking, a problem in which models need to keep track of the state of a number of entities. To isolate the state tracking component from other factors, we propose a benchmark based on three well-defined state tracking tasks and analyse the performance of LLMs in different scenarios. The results indicate that the recent generation of LLMs (specifically, GPT-4 and Llama3) is capable of tracking state, especially when combined with mechanisms such as Chain of Thought. However, models from the previous generation, while understanding the task and solving it correctly in the early steps, often fail once the number of steps grows.
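To make the setup concrete, below is a minimal sketch of what a state tracking probe of this kind can look like: a toy "box swap" task in which a model must report the contents of each box after a sequence of swaps, scored against a programmatically maintained ground truth. The task design, the step counts, and all function names here are illustrative assumptions for exposition only; the abstract does not specify the paper's three benchmark tasks, and no real LLM API is called.

import random
from typing import Callable, List, Tuple


def make_swap_task(n_boxes: int, n_steps: int, seed: int = 0) -> Tuple[List[str], List[Tuple[int, int]]]:
    """Create an initial state (one item per box) and a sequence of swap operations."""
    rng = random.Random(seed)
    items = [f"item_{i}" for i in range(n_boxes)]
    swaps = [tuple(rng.sample(range(n_boxes), 2)) for _ in range(n_steps)]
    return items, swaps


def ground_truth(items: List[str], swaps: List[Tuple[int, int]]) -> List[List[str]]:
    """Return the true state after every swap, so accuracy can be measured per step."""
    state = list(items)
    states = []
    for a, b in swaps:
        state[a], state[b] = state[b], state[a]
        states.append(list(state))
    return states


def build_prompt(items: List[str], swaps: List[Tuple[int, int]], use_cot: bool) -> str:
    """Render the task as text; optionally ask for step-by-step (Chain of Thought) reasoning."""
    lines = [f"Box {i} contains {item}." for i, item in enumerate(items)]
    lines += [f"Step {t + 1}: swap the contents of box {a} and box {b}." for t, (a, b) in enumerate(swaps)]
    lines.append("After all steps, what does each box contain?")
    if use_cot:
        lines.append("Think step by step, updating the state after each swap, before answering.")
    return "\n".join(lines)


def evaluate(answer_fn: Callable[[str], List[str]], n_boxes: int = 5, n_steps: int = 20) -> float:
    """Score a model's final-state answer (one item name per box) against the ground truth."""
    items, swaps = make_swap_task(n_boxes, n_steps)
    truth = ground_truth(items, swaps)[-1]
    prediction = answer_fn(build_prompt(items, swaps, use_cot=True))
    correct = sum(p == t for p, t in zip(prediction, truth))
    return correct / n_boxes


if __name__ == "__main__":
    # Stand-in "model" that tracks state perfectly, just to exercise the harness;
    # a real evaluation would replace this with a call to the LLM under test.
    def oracle(prompt: str) -> List[str]:
        items, swaps = make_swap_task(5, 20)
        return ground_truth(items, swaps)[-1]

    print(f"Oracle accuracy: {evaluate(oracle):.2f}")

Increasing n_steps in a harness like this is one way to surface the failure mode described above, where a model handles the early steps but loses track of the state as the sequence of updates grows longer.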
Similar Papers
(How) Do Language Models Track State?
Computation and Language
Computers learn to remember and follow changing instructions.
On the Limits of Innate Planning in Large Language Models
Artificial Intelligence
Computers struggle to solve puzzles without help.
Tracking World States with Language Models: State-Based Evaluation Using Chess
Artificial Intelligence
Tests if computers understand game rules deeply.