Stochastic Multi-Objective Multi-Armed Bandits: Regret Definition and Algorithm

Published: June 16, 2025 | arXiv ID: 2506.13125v1

By: Mansoor Davoodi, Setareh Maghsudi

Potential Business Impact:

Helps computers choose the best options when balancing many competing goals.

Business Areas:
A/B Testing; Data and Analytics

Multi-armed bandit (MAB) problems are widely applied to online optimization tasks that require balancing exploration and exploitation. In practical scenarios, these tasks often involve multiple conflicting objectives, giving rise to multi-objective multi-armed bandits (MO-MAB). Existing MO-MAB approaches predominantly rely on the Pareto regret metric introduced by Drugan and Nowé (2013). However, this metric has notable limitations, particularly in accounting for all Pareto-optimal arms simultaneously. To address these challenges, we propose a novel and comprehensive regret metric that ensures balanced performance across conflicting objectives. Additionally, we introduce the concept of efficient Pareto-optimal arms, which are specifically designed for online optimization. Based on our new metric, we develop a two-phase MO-MAB algorithm that achieves sublinear regret for both Pareto-optimal and efficient Pareto-optimal arms.
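To make the setting concrete, the sketch below shows the two core ingredients the abstract refers to: identifying the Pareto-optimal arms (those whose mean reward vector is dominated by no other arm) and a Pareto-UCB-style exploration loop in the spirit of Drugan and Nowé (2013). This is an illustrative sketch, not the paper's two-phase algorithm; the function names, the Bernoulli-style reward callback `pull`, and the confidence-bonus form are assumptions for the example.

```python
import numpy as np

def pareto_front(means):
    """Return indices of Pareto-optimal arms: arms whose reward vector
    is not dominated (weakly worse in every objective and strictly
    worse in at least one) by any other arm's vector."""
    means = np.asarray(means, dtype=float)
    optimal = []
    for i, v in enumerate(means):
        dominated = any(
            np.all(w >= v) and np.any(w > v)
            for j, w in enumerate(means) if j != i
        )
        if not dominated:
            optimal.append(i)
    return optimal

def pareto_ucb(pull, n_arms, n_objectives, horizon, seed=0):
    """Illustrative Pareto-UCB loop (hypothetical sketch, not the
    paper's algorithm): each round, form optimistic per-objective
    estimates and pull a uniformly random arm from the empirical
    Pareto front of those estimates."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(n_arms)
    sums = np.zeros((n_arms, n_objectives))
    # Initialize by pulling each arm once.
    for a in range(n_arms):
        sums[a] += pull(a)
        counts[a] = 1
    for t in range(n_arms, horizon):
        # Optimistic estimate: empirical mean plus a confidence bonus.
        bonus = np.sqrt(2 * np.log(t + 1) / counts)[:, None]
        ucb = sums / counts[:, None] + bonus
        a = rng.choice(pareto_front(ucb))
        sums[a] += pull(a)
        counts[a] += 1
    return counts
```

Note that the single-objective UCB tie-breaking disappears here: several arms can be simultaneously Pareto-optimal, which is exactly why the choice of regret metric (and which optimal arms it rewards playing) matters.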

Country of Origin
🇩🇪 Germany

Page Count
21 pages

Category
Computer Science:
Machine Learning (CS)