Thompson Sampling via Fine-Tuning of LLMs

Published: October 15, 2025 | arXiv ID: 2510.13328v1

By: Nicolas Menet, Aleksandar Terzić, Andreas Krause, and more

Potential Business Impact:

Teaches computers to find the best answers faster.

Business Areas:
A/B Testing, Data and Analytics

Bayesian optimization in large unstructured discrete spaces is often hindered by the computational cost of maximizing acquisition functions due to the absence of gradients. We propose a scalable alternative based on Thompson sampling that eliminates the need for acquisition function maximization by directly parameterizing the probability that a candidate yields the maximum reward. Our approach, Thompson Sampling via Fine-Tuning (ToSFiT), leverages the prior knowledge embedded in prompt-conditioned large language models and incrementally adapts them toward the posterior. Theoretically, we derive a novel regret bound for a variational formulation of Thompson sampling that matches the strong guarantees of its standard counterpart. Our analysis reveals the critical role of careful adaptation to the posterior probability of maximality, a principle that underpins our ToSFiT algorithm. Empirically, we validate our method on three diverse tasks: FAQ response refinement, thermally stable protein search, and quantum circuit design. We demonstrate that online fine-tuning significantly improves sample efficiency, with negligible impact on computational efficiency.
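
To make the idea concrete, here is a minimal Python sketch of the loop the abstract describes, on a toy problem: a softmax policy over a finite candidate set stands in for the prompt-conditioned LLM, independent Gaussian posteriors track rewards, and a gradient step adapts the policy toward the posterior probability of maximality. Everything here (candidate set, learning rate, function names such as prob_of_maximality) is an illustrative assumption, not the paper's implementation.

import numpy as np

rng = np.random.default_rng(0)

K = 20                               # number of discrete candidates
true_reward = rng.normal(size=K)     # unknown rewards we want to maximize
noise_std = 0.5

# Independent Gaussian posterior per candidate (conjugate updates).
post_mean = np.zeros(K)
post_var = np.ones(K)

theta = np.zeros(K)                  # logits of the sampling policy p_theta

def policy(theta):
    z = np.exp(theta - theta.max())
    return z / z.sum()

def prob_of_maximality(n_draws=2000):
    # Monte Carlo estimate of P(candidate k is the argmax) under the posterior.
    draws = rng.normal(post_mean, np.sqrt(post_var), size=(n_draws, K))
    winners = draws.argmax(axis=1)
    return np.bincount(winners, minlength=K) / n_draws

for t in range(200):
    # Thompson step: sample a candidate directly from the parameterized policy,
    # so no acquisition-function maximization is needed.
    p = policy(theta)
    k = rng.choice(K, p=p)

    # Observe a noisy reward and update the Gaussian posterior for arm k.
    y = true_reward[k] + noise_std * rng.normal()
    prec = 1.0 / post_var[k] + 1.0 / noise_std**2
    post_mean[k] = (post_mean[k] / post_var[k] + y / noise_std**2) / prec
    post_var[k] = 1.0 / prec

    # "Fine-tuning" step: nudge the policy toward the posterior probability of
    # maximality via gradient descent on the cross-entropy H(p_max, p_theta),
    # whose gradient w.r.t. the logits is p_theta - p_max.
    p_max = prob_of_maximality()
    theta += 0.5 * (p_max - policy(theta))

print("best candidate:", true_reward.argmax(), "policy mode:", policy(theta).argmax())

In the paper's setting the policy is an LLM over an unstructured discrete space rather than a 20-armed softmax, and the adaptation step is fine-tuning of the model's weights, but the structure of the loop (sample, observe, update posterior, adapt toward probability of maximality) is the same.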

Page Count
24 pages

Category
Computer Science:
Machine Learning (CS)