Current Agents Fail to Leverage World Model as Tool for Foresight
By: Cheng Qian, Emre Can Acikgoz, Bingxuan Li, and more
Potential Business Impact:
Agents learn to predict future outcomes to make better choices.
Agents built on vision-language models increasingly face tasks that demand anticipating future states rather than relying on short-horizon reasoning. Generative world models offer a promising remedy: agents could use them as external simulators to foresee outcomes before acting. This paper empirically examines whether current agents can leverage such world models as tools to enhance their cognition. Across diverse agentic and visual question answering tasks, we observe that some agents rarely invoke simulation (fewer than 1%), frequently misuse predicted rollouts (approximately 15%), and often exhibit inconsistent or even degraded performance (up to 5%) when simulation is available or enforced. Attribution analysis further indicates that the primary bottleneck lies in the agents' capacity to decide when to simulate, how to interpret predicted outcomes, and how to integrate foresight into downstream reasoning. These findings underscore the need for mechanisms that foster calibrated, strategic interaction with world models, paving the way toward more reliable anticipatory cognition in future agent systems.
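The bottlenecks named above (deciding when to simulate, interpreting predicted rollouts, and integrating foresight into downstream reasoning) correspond to three decision points in a simple tool-use loop. The Python sketch below illustrates that loop under stated assumptions; the WorldModel and ForesightAgent interfaces, the uncertainty threshold, and the scoring heuristics are all hypothetical placeholders for illustration, not the paper's implementation.

from dataclasses import dataclass
from typing import List

@dataclass
class Rollout:
    """A predicted future-state sequence returned by the world model."""
    frames: List[str]      # textual stand-ins for predicted observations
    confidence: float      # the simulator's self-reported rollout confidence

class WorldModel:
    """Hypothetical external simulator the agent can invoke as a tool."""
    def simulate(self, state: str, action: str, horizon: int = 3) -> Rollout:
        # A real generative world model would render predicted observations;
        # here we return a placeholder rollout for illustration.
        frames = [f"{state} after '{action}' (t+{t})" for t in range(1, horizon + 1)]
        return Rollout(frames=frames, confidence=0.8)

class ForesightAgent:
    """Sketch of an agent that must decide *when* to simulate, *how* to
    read the rollout, and how to fold the foresight into its choice."""
    def __init__(self, world_model: WorldModel, sim_threshold: float = 0.5):
        self.world_model = world_model
        self.sim_threshold = sim_threshold  # uncertainty level that triggers simulation

    def uncertainty(self, state: str, action: str) -> float:
        # Placeholder: a real agent would estimate uncertainty from its
        # policy, e.g. entropy over candidate actions.
        return 0.7 if "ambiguous" in state else 0.2

    def evaluate(self, rollout: Rollout) -> float:
        # Placeholder outcome scoring; a real agent would assess the
        # predicted frames against the task goal.
        return 1.0 if "goal" in rollout.frames[-1] else 0.5

    def act(self, state: str, candidate_actions: List[str]) -> str:
        best_action, best_score = candidate_actions[0], float("-inf")
        for action in candidate_actions:
            score = 0.0
            # Decision point 1: invoke simulation only when uncertain
            # (calibrated, strategic invocation rather than never or always).
            if self.uncertainty(state, action) > self.sim_threshold:
                rollout = self.world_model.simulate(state, action)
                # Decision point 2: interpret the rollout, discounting
                # low-confidence predictions instead of taking them at face value.
                score = self.evaluate(rollout) * rollout.confidence
            # Decision point 3: integrate the foresight into action selection.
            if score > best_score:
                best_action, best_score = action, score
        return best_action

if __name__ == "__main__":
    agent = ForesightAgent(WorldModel())
    print(agent.act("ambiguous kitchen scene", ["open drawer", "open goal cabinet"]))

Each comment marks one of the three failure modes the paper attributes to current agents: skipping the simulation call entirely, misreading the rollout, or failing to let the prediction change the final action.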
Similar Papers
Agent2World: Learning to Generate Symbolic World Models via Adaptive Multi-Agent Feedback
Artificial Intelligence
Teaches computers how the world works.
Simulating the Visual World with Artificial Intelligence: A Roadmap
Artificial Intelligence
Creates realistic videos that act like real worlds.