Cascading Bandits With Feedback
By: R Sri Prakash, Nikhil Karamchandani, Sharayu Moharir
Potential Business Impact:
Helps smart devices choose the best AI model.
Motivated by the challenges of edge inference, we study a variant of the cascade bandit model in which each arm corresponds to an inference model with an associated accuracy and error probability. We analyse four decision-making policies, namely Explore-then-Commit, Action Elimination, Lower Confidence Bound (LCB), and Thompson Sampling, and provide sharp theoretical regret guarantees for each. Unlike in classical bandit settings, Explore-then-Commit and Action Elimination incur suboptimal regret because they commit to a fixed ordering after the exploration phase, limiting their ability to adapt. In contrast, LCB and Thompson Sampling continuously update their decisions based on observed feedback, achieving constant O(1) regret. Simulations corroborate these theoretical findings, highlighting the crucial role of adaptivity for efficient edge inference under uncertainty.
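To make the adaptivity point concrete, below is a minimal sketch of a Thompson Sampling policy for a simplified cascade of inference models. It assumes a toy model in which a query is passed through models in the chosen order and each model resolves it independently with an unknown success probability; the probabilities, horizon, and regret proxy here are illustrative placeholders, not the paper's exact formulation or results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (assumed, not from the paper): 4 inference models,
# each resolving a query correctly with an unknown probability.
true_success = np.array([0.9, 0.75, 0.6, 0.4])
K = len(true_success)
T = 5000

# Thompson Sampling over the cascade: sample a success probability for
# each model from its Beta posterior, then query models in decreasing
# order of the sampled values until one resolves the query.
alpha = np.ones(K)  # Beta posterior: observed successes + 1
beta = np.ones(K)   # Beta posterior: observed failures + 1

total_queries = 0   # models invoked in total (a crude cost/regret proxy)
for t in range(T):
    samples = rng.beta(alpha, beta)
    order = np.argsort(-samples)      # best sampled model first
    for k in order:
        total_queries += 1
        resolved = rng.random() < true_success[k]
        # Feedback: every model that was actually queried updates its posterior.
        if resolved:
            alpha[k] += 1
            break
        beta[k] += 1

print(f"average models queried per round: {total_queries / T:.2f}")
print("posterior means:", np.round(alpha / (alpha + beta), 2))
```

An LCB-style variant would replace the posterior samples with confidence-bound estimates of each model's error rate. Either way, the policy keeps updating its ordering after every round, which is the adaptivity the abstract credits for the constant-regret behaviour, in contrast to policies that fix an ordering after an exploration phase.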
Similar Papers
Optimal Arm Elimination Algorithms for Combinatorial Bandits
Machine Learning (CS)
Helps choose the best options faster.
Incentivized Lipschitz Bandits
Machine Learning (CS)
Helps robots learn faster with smart rewards.
Multi-User Contextual Cascading Bandits for Personalized Recommendation
Machine Learning (CS)
Shows ads better to many people at once.