FairDICE: Fairness-Driven Offline Multi-Objective Reinforcement Learning
By: Woosung Kim, Jinho Lee, Jongmin Lee, and more
Potential Business Impact:
Helps computers make fair decisions with many goals.
Multi-objective reinforcement learning (MORL) aims to optimize policies in the presence of conflicting objectives, where linear scalarization is commonly used to reduce vector-valued returns into scalar signals. While effective for certain preferences, this approach cannot capture fairness-oriented goals such as Nash social welfare or max-min fairness, which require nonlinear and non-additive trade-offs. Although several online algorithms have been proposed for specific fairness objectives, a unified approach for optimizing nonlinear welfare criteria in the offline setting, where learning must proceed from a fixed dataset, remains unexplored. In this work, we present FairDICE, the first offline MORL framework that directly optimizes nonlinear welfare objectives. FairDICE leverages distribution correction estimation to jointly account for welfare maximization and distributional regularization, enabling stable and sample-efficient learning without requiring explicit preference weights or exhaustive weight search. Across multiple offline benchmarks, FairDICE demonstrates strong fairness-aware performance compared to existing baselines.
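To see why linear scalarization cannot express the fairness criteria the abstract mentions, consider a minimal numeric sketch. The policies, returns, and weights below are illustrative assumptions, not taken from the paper: Nash social welfare is computed as the geometric mean of per-objective returns, and max-min fairness as the worst-case objective.

```python
import numpy as np

# Hypothetical vector-valued returns for two policies over 3 objectives
# (illustrative numbers, not from the paper).
returns_a = np.array([9.0, 9.0, 0.1])  # strong on two objectives, near-zero on one
returns_b = np.array([6.0, 6.0, 6.0])  # balanced across all objectives

weights = np.ones(3) / 3  # uniform preference weights for linear scalarization

def linear_scalarization(r, w):
    # Weighted sum: additive in the objectives.
    return float(w @ r)

def nash_social_welfare(r):
    # Geometric mean of per-objective returns (assumes positive returns);
    # nonlinear and non-additive.
    return float(np.prod(r) ** (1.0 / len(r)))

def max_min_fairness(r):
    # Worst-case objective; also nonlinear and non-additive.
    return float(np.min(r))

# Linear scalarization slightly prefers the imbalanced policy A...
print(linear_scalarization(returns_a, weights))  # ≈ 6.03
print(linear_scalarization(returns_b, weights))  # 6.0

# ...while both fairness criteria strongly prefer the balanced policy B.
print(nash_social_welfare(returns_a))  # ≈ 2.01
print(nash_social_welfare(returns_b))  # 6.0
print(max_min_fairness(returns_a))     # 0.1
print(max_min_fairness(returns_b))     # 6.0
```

No fixed weight vector makes a weighted sum rank these policies the way the fairness criteria do, which is why optimizing such objectives requires going beyond linear scalarization, as FairDICE does in the offline setting.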
Similar Papers
Interpretability by Design for Efficient Multi-Objective Reinforcement Learning
Artificial Intelligence
Finds best ways to balance many goals.
Benchmarking Offline Multi-Objective Reinforcement Learning in Critical Care
Machine Learning (CS)
Helps doctors make better patient care choices.
MOORL: A Framework for Integrating Offline-Online Reinforcement Learning
Machine Learning (CS)
Teaches robots to learn from past mistakes.