Thompson Sampling via Fine-Tuning of LLMs
By: Nicolas Menet, Aleksandar Terzić, Andreas Krause, and more
Potential Business Impact:
Teaches computers to find the best answers faster.
Bayesian optimization in large unstructured discrete spaces is often hindered by the computational cost of maximizing acquisition functions due to the absence of gradients. We propose a scalable alternative based on Thompson sampling that eliminates the need for acquisition function maximization by directly parameterizing the probability that a candidate yields the maximum reward. Our approach, Thompson Sampling via Fine-Tuning (ToSFiT), leverages the prior knowledge embedded in prompt-conditioned large language models and incrementally adapts them toward the posterior. Theoretically, we derive a novel regret bound for a variational formulation of Thompson sampling that matches the strong guarantees of its standard counterpart. Our analysis reveals the critical role of careful adaptation to the posterior probability of maximality, a principle that underpins our ToSFiT algorithm. Empirically, we validate our method on three diverse tasks: FAQ response refinement, thermally stable protein search, and quantum circuit design. We demonstrate that online fine-tuning significantly improves sample efficiency, with negligible impact on computational efficiency.
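To make the core idea concrete, here is a minimal sketch of variational Thompson sampling on a toy discrete problem: a trainable softmax sampler directly parameterizes the probability that each candidate is the maximizer, and after every observation it is nudged toward the posterior probability of maximality, standing in for the paper's LLM fine-tuning step. The Gaussian reward model, the cross-entropy update, and every name below are illustrative assumptions for this sketch, not the authors' implementation.

```python
# Toy variational Thompson sampling: the sampling distribution is itself
# a trainable model adapted toward the posterior probability of maximality.
# Assumptions (not from the paper): Gaussian rewards with unit noise,
# a softmax sampler over K candidates, one cross-entropy step per round.
import numpy as np

rng = np.random.default_rng(0)

K = 8                                  # size of the discrete candidate space
true_rewards = rng.normal(size=K)      # hidden objective

# Conjugate Gaussian posterior over each candidate's mean reward.
post_mean = np.zeros(K)
post_var = np.ones(K)

# Trainable sampler: logits parameterize p(candidate is the maximizer).
logits = np.zeros(K)

def prob_of_maximality(n_samples=2000):
    """Monte Carlo estimate of p(candidate k has the highest reward)."""
    draws = rng.normal(post_mean, np.sqrt(post_var), size=(n_samples, K))
    winners = draws.argmax(axis=1)
    return np.bincount(winners, minlength=K) / n_samples

for t in range(200):
    # Select a candidate with a single draw from the parameterized sampler;
    # no acquisition function is maximized anywhere in the loop.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    arm = rng.choice(K, p=probs)

    # Observe a noisy reward and update the Gaussian posterior in closed form.
    reward = true_rewards[arm] + rng.normal()
    precision = 1.0 / post_var[arm] + 1.0
    post_mean[arm] = (post_mean[arm] / post_var[arm] + reward) / precision
    post_var[arm] = 1.0 / precision

    # "Fine-tune" the sampler toward the posterior probability of maximality:
    # one gradient step on the cross-entropy -sum(target * log probs).
    target = prob_of_maximality()
    logits += 0.5 * (target - probs)

print("best candidate:", true_rewards.argmax(),
      "sampler's favorite:", logits.argmax())
```

The point of the sketch is structural: choosing a candidate costs one forward sample from the model, and the posterior enters only through the incremental adaptation step, which is where a prompt-conditioned LLM and online fine-tuning would replace the softmax and its gradient update.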
Similar Papers
Thompson Sampling for Multi-Objective Linear Contextual Bandit
Machine Learning (Stat)
Helps computers make better choices with many goals.
No-Regret Thompson Sampling for Finite-Horizon Markov Decision Processes with Gaussian Processes
Machine Learning (CS)
Helps smart robots learn faster in new situations.
Learning to Think: Information-Theoretic Reinforcement Fine-Tuning for LLMs
Machine Learning (CS)
Makes AI think smarter with fewer steps.