FairDICE: Fairness-Driven Offline Multi-Objective Reinforcement Learning

Published: June 9, 2025 | arXiv ID: 2506.08062v1

By: Woosung Kim, Jinho Lee, Jongmin Lee, and more

Potential Business Impact:

Helps computers make fair decisions when balancing many competing goals.

Business Areas:
MMO Games, Gaming

Multi-objective reinforcement learning (MORL) aims to optimize policies in the presence of conflicting objectives, where linear scalarization is commonly used to reduce vector-valued returns into scalar signals. While effective for certain preferences, this approach cannot capture fairness-oriented goals such as Nash social welfare or max-min fairness, which require nonlinear and non-additive trade-offs. Although several online algorithms have been proposed for specific fairness objectives, a unified approach for optimizing nonlinear welfare criteria in the offline setting, where learning must proceed from a fixed dataset, remains unexplored. In this work, we present FairDICE, the first offline MORL framework that directly optimizes nonlinear welfare objectives. FairDICE leverages distribution correction estimation to jointly account for welfare maximization and distributional regularization, enabling stable and sample-efficient learning without requiring explicit preference weights or exhaustive weight search. Across multiple offline benchmarks, FairDICE demonstrates strong fairness-aware performance compared to existing baselines.
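To see why linear scalarization can miss fairness-oriented goals, consider a minimal sketch (not from the paper; the policies and return values are hypothetical) comparing a weighted sum against Nash social welfare and max-min fairness on two candidate policies with vector-valued returns:

```python
import math

# Hypothetical vector-valued returns for two candidate policies
# over two conflicting objectives (illustrative numbers only).
returns = {
    "policy_A": [10.0, 0.0],  # excels at objective 1, ignores objective 2
    "policy_B": [5.0, 5.0],   # balances both objectives
}

def linear_scalarization(r, w):
    """Weighted sum of objectives -- the standard MORL reduction."""
    return sum(wi * ri for wi, ri in zip(w, r))

def nash_social_welfare(r):
    """Geometric mean of objectives -- a nonlinear, fairness-oriented welfare."""
    return math.prod(r) ** (1.0 / len(r))

def max_min_fairness(r):
    """Worst-off objective value -- another nonlinear fairness criterion."""
    return min(r)

weights = [0.5, 0.5]
for name, r in returns.items():
    print(name,
          linear_scalarization(r, weights),
          nash_social_welfare(r),
          max_min_fairness(r))
```

With equal weights, linear scalarization scores both policies identically (5.0), so no choice of this weight vector distinguishes them; Nash social welfare (0.0 vs 5.0) and max-min fairness (0.0 vs 5.0) both prefer the balanced policy. Capturing such preferences requires optimizing the nonlinear criterion directly, which is the setting FairDICE targets offline.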

Country of Origin
🇰🇷 Korea, Republic of

Page Count
19 pages

Category
Computer Science:
Machine Learning (CS)