Toward Agents That Reason About Their Computation
By: Adrian Orenstein, Jessica Chen, Gwyneth Anne Delos Santos, and more
Potential Business Impact:
Makes smart computer players use less energy.
While reinforcement learning agents can achieve superhuman performance in many complex tasks, they typically do not become more computationally efficient as they improve. In contrast, humans gradually require less cognitive effort as they become more proficient at a task. If agents could reason about their compute as they learn, could they similarly reduce their computational footprint? If so, we could build more energy-efficient agents or free up compute cycles for other processes, such as planning. In this paper, we experiment with showing agents the cost of their computation and giving them the ability to control when they use compute. We conduct our experiments on the Arcade Learning Environment, and our results demonstrate that, with the same training compute budget, agents that reason about their compute perform better on 75% of games. Furthermore, these agents use one third as much compute on average. We analyze individual games and show where agents gain these efficiencies.
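To make the idea concrete, here is a minimal sketch of one way to expose compute cost to an agent in a Gymnasium-style Arcade Learning Environment loop. The wrapper name, the tuple action encoding, and the compute_cost penalty are illustrative assumptions, not the paper's actual mechanism: the agent pays a small reward penalty whenever it runs its policy for a fresh action, and can instead replay its cached action for free.

```python
import gymnasium as gym


class ComputeCostWrapper(gym.Wrapper):
    """Hypothetical wrapper: charges the agent for computing a new action
    and lets it repeat the previous action at no cost.

    The action becomes a pair (compute_flag, env_action). When
    compute_flag == 0, the last cached action is replayed for free;
    when compute_flag == 1, `compute_cost` is subtracted from the reward
    and env_action becomes the new cached action.
    """

    def __init__(self, env, compute_cost=0.01):
        super().__init__(env)
        self.compute_cost = compute_cost
        self._last_action = 0  # ALE action 0 is a no-op
        self.action_space = gym.spaces.Tuple(
            (gym.spaces.Discrete(2), env.action_space)
        )

    def reset(self, **kwargs):
        self._last_action = 0
        return self.env.reset(**kwargs)

    def step(self, action):
        compute_flag, env_action = action
        if compute_flag == 1:
            self._last_action = env_action  # paid for a fresh decision
        obs, reward, terminated, truncated, info = self.env.step(self._last_action)
        # Charge only on steps where compute was actually used.
        reward -= self.compute_cost * compute_flag
        info["compute_used"] = bool(compute_flag)
        return obs, reward, terminated, truncated, info


# Example usage (assumes ale-py is installed and registered with Gymnasium):
# env = ComputeCostWrapper(gym.make("ALE/Breakout-v5"), compute_cost=0.01)
```

Under this kind of setup, an agent that learns to recompute only when the screen meaningfully changes would spend less compute for similar return, which is the trade-off the paper's results quantify.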
Similar Papers
AI Agents as Universal Task Solvers
Artificial Intelligence
AI learns faster by understanding task structure.
Demystifying Reinforcement Learning in Agentic Reasoning
Computation and Language
Teaches computers to think better and solve harder problems.
e1: Learning Adaptive Control of Reasoning Effort
Artificial Intelligence
AI thinks smarter, faster, and cheaper.