Optimal control of the future via prospective learning with control
By: Yuxin Bai, Aranyak Acharyya, Ashwin De Silva, and more
Potential Business Impact:
AI learns to control things in changing worlds.
Optimal control of the future is the next frontier for AI. Current approaches to this problem are typically rooted in reinforcement learning (RL). While powerful, this learning framework is mathematically distinct from supervised learning, which has been the main workhorse for the recent achievements in AI. Moreover, RL typically operates in a stationary environment with episodic resets, limiting its utility in more realistic settings. Here, we extend supervised learning to address learning to control in non-stationary, reset-free environments. Using this framework, called "Prospective Learning with Control (PL+C)", we prove that under certain fairly general assumptions, empirical risk minimization (ERM) asymptotically achieves the Bayes optimal policy. We then consider a specific instance of prospective learning with control, foraging, which is a canonical task for any mobile agent, natural or artificial. We illustrate that modern RL algorithms fail to learn in these non-stationary reset-free environments, and even with modifications, they are orders of magnitude less efficient than our prospective foraging agents.
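To make the claim concrete, here is a rough sketch of the prospective-learning objective that PL+C extends; the notation (a time-indexed hypothesis sequence $h = (h_1, h_2, \dots)$, loss $\ell$, horizon weights $w_s$, and data process $Z$) is illustrative rather than the paper's exact formulation. The learner performs ERM on the data observed so far,
$$\hat{h} = \arg\min_{h \in \mathcal{H}} \frac{1}{t} \sum_{s=1}^{t} \ell\big(h_s(Z_s), Z_{s+1}\big),$$
and is judged by the prospective risk, the expected loss on future data drawn from the possibly non-stationary process,
$$R_t(h) = \mathbb{E}\Big[\sum_{s > t} w_s\, \ell\big(h_s(Z_s), Z_{s+1}\big)\Big].$$
In the control setting, each $h_s$ also selects an action that influences the distribution of future observations; the paper's result is that, under fairly general assumptions, this ERM procedure asymptotically attains the Bayes optimal policy.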
Similar Papers
Optimal Control of the Future via Prospective Foraging
Machine Learning (Stat)
AI learns to make better decisions in changing worlds.
A Review of Learning-Based Motion Planning: Toward a Data-Driven Optimal Control Approach
Robotics
Makes self-driving cars safer and smarter.
Continual Reinforcement Learning for Cyber-Physical Systems: Lessons Learned and Open Challenges
Machine Learning (CS)
Teaches self-driving cars to learn new parking spots.