Score: 1

Distributional Inverse Reinforcement Learning

Published: October 3, 2025 | arXiv ID: 2510.03013v1

By: Feiyang Wu, Ye Zhao, Anqi Wu

Potential Business Impact:

Learns reward functions and risk-aware policies by watching expert demonstrations.

Business Areas:
Simulation Software

We propose a distributional framework for offline Inverse Reinforcement Learning (IRL) that jointly models uncertainty over reward functions and full distributions of returns. Unlike conventional IRL approaches, which recover a deterministic reward estimate or match only expected returns, our method captures richer structure in expert behavior, particularly the reward distribution. It does so by minimizing first-order stochastic dominance (FSD) violations and integrating distortion risk measures (DRMs) into policy learning, which enables the recovery of both reward distributions and distribution-aware policies. This formulation is well suited to behavior analysis and risk-aware imitation learning. Empirical results on synthetic benchmarks, real-world neurobehavioral data, and MuJoCo control tasks demonstrate that our method recovers expressive reward representations and achieves state-of-the-art imitation performance.
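To make the abstract's two key ingredients concrete, here is a minimal NumPy sketch, not the authors' implementation, of (a) an empirical FSD-violation penalty between agent and expert return samples and (b) a sample-based distortion risk measure. The function names, the grid-based CDF comparison, and the CVaR-style distortion are illustrative assumptions; the paper combines these quantities inside a joint learning objective, which this sketch does not reproduce.

```python
import numpy as np

def fsd_violation(agent_returns, expert_returns, num_points=200):
    """Empirical first-order stochastic dominance (FSD) violation.

    The agent's return distribution FSD-dominates the expert's when its
    CDF lies at or below the expert's CDF everywhere; this penalty
    averages the positive part of the CDF gap over an evaluation grid.
    (Illustrative sketch, not the paper's exact estimator.)
    """
    agent = np.asarray(agent_returns)
    expert = np.asarray(expert_returns)
    grid = np.linspace(
        min(agent.min(), expert.min()),
        max(agent.max(), expert.max()),
        num_points,
    )
    cdf_agent = (agent[None, :] <= grid[:, None]).mean(axis=1)
    cdf_expert = (expert[None, :] <= grid[:, None]).mean(axis=1)
    return float(np.maximum(cdf_agent - cdf_expert, 0.0).mean())

def distorted_value(returns, distortion):
    """Sample-based distortion risk measure (DRM).

    `distortion` maps cumulative probability in [0, 1] to [0, 1] with
    g(0)=0 and g(1)=1. The identity recovers the plain expectation; a
    concave distortion such as CVaR's g(u) = min(u / alpha, 1)
    concentrates weight on the lowest returns, giving a risk-averse
    value of the return distribution.
    """
    x = np.sort(np.asarray(returns))   # ascending: worst outcomes first
    n = len(x)
    u = np.arange(n + 1) / n           # cumulative probability levels
    weights = distortion(u[1:]) - distortion(u[:-1])
    return float(np.dot(weights, x))

# Hypothetical usage on synthetic return samples.
rng = np.random.default_rng(0)
agent = rng.normal(1.0, 1.0, size=1000)
expert = rng.normal(1.2, 0.5, size=1000)
cvar_10 = lambda u: np.minimum(u / 0.1, 1.0)  # CVaR at level alpha = 0.1
print(fsd_violation(agent, expert))   # > 0: agent does not dominate expert
print(distorted_value(agent, cvar_10))  # mean of the worst 10% of returns
```

With the CVaR distortion, `distorted_value` reduces to the mean of the worst alpha fraction of sampled returns, which is one standard way a DRM induces risk-aware policy evaluation; any other valid distortion function can be dropped in unchanged.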

Country of Origin
🇺🇸 United States

Page Count
20 pages

Category
Computer Science:
Machine Learning (CS)