Inverse Reinforcement Learning Using Just Classification and a Few Regressions
By: Lars van der Laan, Nathan Kallus, Aurélien Bibaut
Potential Business Impact:
Teaches robots to learn by watching.
Inverse reinforcement learning (IRL) aims to explain observed behavior by uncovering an underlying reward. In the maximum-entropy or Gumbel-shocks-to-reward frameworks, this amounts to fitting a reward function and a soft value function that together satisfy the soft Bellman consistency condition and maximize the likelihood of observed actions. While this perspective has had enormous impact in imitation learning for robotics and understanding dynamic choices in economics, practical learning algorithms often involve delicate inner-loop optimization, repeated dynamic programming, or adversarial training, all of which complicate the use of modern, highly expressive function approximators like neural nets and boosting. We revisit softmax IRL and show that the population maximum-likelihood solution is characterized by a linear fixed-point equation involving the behavior policy. This observation reduces IRL to two off-the-shelf supervised learning problems: probabilistic classification to estimate the behavior policy, and iterative regression to solve the fixed point. The resulting method is simple and modular across function approximation classes and algorithms. We provide a precise characterization of the optimal solution, a generic oracle-based algorithm, finite-sample error bounds, and empirical results showing competitive or superior performance to MaxEnt IRL.
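To make the two-oracle recipe concrete, below is a minimal, self-contained sketch in Python (numpy + scikit-learn) of one plausible instantiation. It is not the paper's exact algorithm: for illustration it assumes a small tabular MDP, a hypothetical reference action a0 whose reward is normalized to zero, and the standard softmax/soft-Bellman identity log pi_b(a|s) = r(s,a) + gamma*E[V(s')|s,a] - V(s), under which the soft value function satisfies a linear fixed point that iterative regression can solve; the paper's actual fixed-point equation, normalization, and estimators may differ.

```python
# Sketch (not the paper's exact method): softmax IRL via
#   (1) probabilistic classification for the behavior policy pi_b, and
#   (2) iterative regression for the soft value function V.
# Illustrative assumptions: tabular MDP, reference action a0 with r(s, a0) = 0,
# one-hot features, and sklearn models as the off-the-shelf oracles.

import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 3, 0.9

# ---- synthetic offline data from an arbitrary behavior policy ------------
logits = rng.normal(size=(n_states, n_actions))
pi_b_true = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a] over s'

n = 20000
s = rng.integers(n_states, size=n)
a = np.array([rng.choice(n_actions, p=pi_b_true[si]) for si in s])
s_next = np.array([rng.choice(n_states, p=P[si, ai]) for si, ai in zip(s, a)])

one_hot = np.eye(n_states)
X_s = one_hot[s]  # state features

# ---- step 1: probabilistic classification of the behavior policy ---------
clf = LogisticRegression(max_iter=1000).fit(X_s, a)
log_pi_b = np.log(clf.predict_proba(one_hot))  # shape (n_states, n_actions)

# ---- step 2: iterative regression for the soft value fixed point ---------
# Under r(s, a0) = 0, the softmax identity gives
#   V(s) = -log pi_b(a0|s) + gamma * E[V(s') | s, a0],
# a gamma-contraction solved by repeated regression on transitions with a = a0.
a0 = 0
mask = a == a0
V = np.zeros(n_states)
for _ in range(200):
    y = -log_pi_b[s[mask], a0] + gamma * V[s_next[mask]]
    V_new = LinearRegression().fit(X_s[mask], y).predict(one_hot)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

# ---- recover the reward: r(s,a) = log pi_b(a|s) + V(s) - gamma*E[V(s')|s,a]
X_sa = np.eye(n_states * n_actions)[s * n_actions + a]  # one-hot (s, a) pairs
ev_reg = LinearRegression().fit(X_sa, V[s_next])
EV_next = ev_reg.predict(np.eye(n_states * n_actions)).reshape(n_states, n_actions)
r_hat = log_pi_b + V[:, None] - gamma * EV_next
print("recovered reward (normalized so r(s, a0) = 0):\n", np.round(r_hat, 3))
```

The classifier and the regressors here are placeholders: any probabilistic classifier and any regression learner (boosting, neural networks) could be substituted, which is the sense in which the abstract describes the reduction as modular across function approximation classes.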
Similar Papers
Recursive Deep Inverse Reinforcement Learning
Machine Learning (CS)
Figures out bad guys' plans fast.
Symmetry-Guided Multi-Agent Inverse Reinforcement Learning
Robotics
Robots learn better with less practice.