Optimal control of the future via prospective learning with control

Published: November 11, 2025 | arXiv ID: 2511.08717v2

By: Yuxin Bai, Aranyak Acharyya, Ashwin De Silva, and more

Potential Business Impact:

AI agents learn to control systems in changing environments that never reset.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Optimal control of the future is the next frontier for AI. Current approaches to this problem are typically rooted in reinforcement learning (RL). While powerful, this learning framework is mathematically distinct from supervised learning, which has been the main workhorse for the recent achievements in AI. Moreover, RL typically operates in a stationary environment with episodic resets, limiting its utility in more realistic settings. Here, we extend supervised learning to address learning to control in non-stationary, reset-free environments. Using this framework, called "Prospective Learning with Control (PL+C)", we prove that under fairly general assumptions, empirical risk minimization (ERM) asymptotically achieves the Bayes optimal policy. We then consider a specific instance of prospective learning with control: foraging, a canonical task for any mobile agent, natural or artificial. We illustrate that modern RL algorithms fail to learn in these non-stationary, reset-free environments, and that even with modifications they are orders of magnitude less efficient than our prospective foraging agents.
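Since the abstract frames control as supervised learning via ERM over time-indexed hypotheses, a small sketch may make the idea concrete. The Python below is illustrative only: the function name prospective_erm, the (t, x, outcome) trajectory format, and the discounting scheme are our assumptions for exposition, not the paper's actual implementation.

```python
# A minimal sketch of prospective ERM for control, under the assumptions above.

def prospective_erm(hypotheses, trajectory, loss, gamma=0.99):
    """Pick the time-indexed policy with the lowest empirical future loss.

    hypotheses: callables h(t, x) -> action (the hypothesis class)
    trajectory: one reset-free stream of (t, x, outcome) tuples
    loss:       callable loss(action, outcome) -> float
    gamma:      discount factor weighting the near future more heavily
    """
    def empirical_risk(h):
        # Discounted cumulative loss of policy h along the observed stream.
        return sum(gamma ** t * loss(h(t, x), y) for t, x, y in trajectory)

    # ERM over the hypothesis class; the paper's result is that this choice
    # is asymptotically Bayes optimal under fairly general assumptions.
    return min(hypotheses, key=empirical_risk)


# Illustrative usage: two constant policies on a toy alternating stream.
policies = [lambda t, x: 0, lambda t, x: 1]
stream = [(t, None, t % 2) for t in range(100)]  # outcome alternates 0, 1
best = prospective_erm(policies, stream, loss=lambda a, y: float(a != y))
```

The key departure from classical ERM is that hypotheses take the time index t as input, so the learned policy can change with a non-stationary environment rather than assuming a fixed distribution.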

Page Count
17 pages

Category
Statistics: Machine Learning (stat.ML)