Threshold-Based Optimal Arm Selection in Monotonic Bandits: Regret Lower Bounds and Algorithms

Published: September 2, 2025 | arXiv ID: 2509.02119v1

By: Chanakya Varude, Jay Chaudhary, Siddharth Kaushik, and more

Potential Business Impact:

Finds the best option near a target.

Business Areas:
A/B Testing, Data and Analytics

In multi-armed bandit problems, the typical goal is to identify the arm with the highest reward. This paper explores a threshold-based bandit problem, aiming to select an arm based on its relation to a prescribed threshold \(\tau\). We study variants where the optimal arm is the first above \(\tau\), the \(k^{th}\) arm above or below it, or the closest to it, under a monotonic structure of arm means. We derive asymptotic regret lower bounds, showing dependence only on the arms adjacent to \(\tau\). Motivated by applications in communication networks (CQI allocation), clinical dosing, energy management, recommendation systems, and more, we propose algorithms whose optimality is validated through Monte Carlo simulations. Our work extends classical bandit theory with threshold constraints for efficient decision-making.
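To make the "closest to \(\tau\)" variant concrete, here is a minimal illustrative sketch, not the paper's algorithm: a UCB-style rule that pulls the arm whose empirical mean is nearest the threshold, discounted by an exploration bonus. The Gaussian noise model, bonus form, and horizon are all assumptions for illustration.

```python
import math
import random

def closest_to_threshold_bandit(means, tau, horizon=5000, seed=0):
    """Illustrative sketch: repeatedly pull the arm whose empirical mean
    looks closest to tau, with a log-based exploration bonus, then commit
    to the arm with the closest final estimate. (Hypothetical helper, not
    the algorithm proposed in the paper.)"""
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k
    sums = [0.0] * k
    for t in range(horizon):
        if t < k:
            arm = t  # initialize: pull each arm once
        else:
            def score(i):
                est = sums[i] / counts[i]
                bonus = math.sqrt(2 * math.log(t + 1) / counts[i])
                return abs(est - tau) - bonus  # lower is better
            arm = min(range(k), key=score)
        # Gaussian reward noise is an assumption for this sketch
        reward = means[arm] + rng.gauss(0, 0.05)
        counts[arm] += 1
        sums[arm] += reward
    return min(range(k), key=lambda i: abs(sums[i] / counts[i] - tau))

# With monotonic means [0.1, 0.3, 0.5, 0.7, 0.9] and tau = 0.55,
# the closest arm is index 2 (mean 0.5).
print(closest_to_threshold_bandit([0.1, 0.3, 0.5, 0.7, 0.9], 0.55))
```

The monotonic structure of the means is what lets the paper's lower bounds depend only on the arms adjacent to \(\tau\); the generic sketch above ignores that structure and so explores more than necessary.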

Country of Origin
🇮🇳 India

Page Count
9 pages

Category
Computer Science:
Machine Learning (CS)