Score: 1

Constrained Linear Thompson Sampling

Published: March 3, 2025 | arXiv ID: 2503.02043v2

By: Aditya Gangrade, Venkatesh Saligrama

Potential Business Impact:

Lets learning systems make safe decisions under unknown constraints while substantially reducing computational cost.

Business Areas:
A/B Testing, Data and Analytics

We study safe linear bandits (SLBs), where an agent selects actions from a convex set to maximize an unknown linear objective subject to unknown linear constraints in each round. Existing methods for SLBs provide strong regret guarantees, but require solving expensive optimization problems (e.g., second-order cone programs or NP-hard programs). To address this, we propose Constrained Linear Thompson Sampling (COLTS), a sampling-based framework that selects actions by solving perturbed linear programs, which significantly reduces computational costs while matching the regret and risk of prior methods. We develop two main variants: S-COLTS, which ensures zero risk and $\widetilde{O}(\sqrt{d^3 T})$ regret given a safe action, and R-COLTS, which achieves $\widetilde{O}(\sqrt{d^3 T})$ regret and risk with no instance information. In simulations, these methods match or outperform state-of-the-art SLB approaches while substantially improving scalability. On the technical front, we introduce a novel coupled noise design that ensures frequent 'local optimism' about the true optimum, and a scaling-based analysis to handle the per-round variability of constraints.
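To make the sampling-and-LP idea concrete, below is a minimal Python sketch of a COLTS-style round loop (closest in spirit to R-COLTS, which needs no known safe action). The ridge-regression updates, the confidence scaling `beta`, and the particular way a single coupled Gaussian draw perturbs both the objective and constraint estimates are illustrative assumptions for this sketch, not the paper's exact construction.

```python
# Sketch of a COLTS-style loop: sample perturbed estimates of the unknown
# objective and constraints, then pick the action by solving a linear program
# over a known polytope action set. All scalings below are illustrative.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

d, m, T = 5, 2, 200                        # action dim, # constraints, horizon
theta_true = rng.normal(size=d)            # unknown linear objective
A_true = 0.3 * rng.normal(size=(m, d))     # unknown linear constraints A x <= b
b = np.ones(m)
bounds = [(-1.0, 1.0)] * d                 # known polytope action set: a box

lam, noise_sd = 1.0, 0.1
V = lam * np.eye(d)                        # shared ridge design matrix
s_theta, s_A = np.zeros(d), np.zeros((m, d))
theta_hat, A_hat = np.zeros(d), np.zeros((m, d))

for t in range(1, T + 1):
    root = np.linalg.cholesky(np.linalg.inv(V))
    beta = np.sqrt(d * np.log(1 + t))      # illustrative confidence scaling

    # Coupled perturbation: one Gaussian draw perturbs the objective and all
    # constraint rows together, mimicking the paper's coupled-noise design.
    xi = rng.normal(size=d)
    theta_tilde = theta_hat + beta * (root @ xi)
    A_tilde = A_hat - beta * (root @ xi)   # optimistically loosened constraints

    # Perturbed LP: maximize theta_tilde . x  s.t.  A_tilde x <= b, x in box.
    res = linprog(-theta_tilde, A_ub=A_tilde, b_ub=b, bounds=bounds,
                  method="highs")
    x = res.x if res.success else np.zeros(d)

    # Bandit feedback: noisy reward and noisy constraint measurements.
    r = theta_true @ x + noise_sd * rng.normal()
    y = A_true @ x + noise_sd * rng.normal(size=m)

    # Shared ridge-regression update for objective and constraint estimates.
    V += np.outer(x, x)
    s_theta += r * x
    s_A += np.outer(y, x)
    theta_hat = np.linalg.solve(V, s_theta)
    A_hat = np.linalg.solve(V, s_A.T).T
```

The key computational point the abstract highlights is visible here: each round only requires one linear program over the action set, rather than a second-order cone or harder optimization.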

Country of Origin
🇺🇸 United States

Page Count
43 pages

Category
Computer Science:
Machine Learning (CS)