LILO: Bayesian Optimization with Interactive Natural Language Feedback
By: Katarzyna Kobalczyk, Zhiyuan Jerry Lin, Benjamin Letham, and others
Potential Business Impact:
Lets computers learn from your plain-language feedback.
For many real-world applications, feedback is essential in translating complex, nuanced, or subjective goals into quantifiable optimization objectives. We propose a language-in-the-loop framework that uses a large language model (LLM) to convert unstructured natural-language feedback into scalar utilities, enabling Bayesian optimization (BO) over a numeric search space. Unlike preferential BO, which accepts only restricted feedback formats and requires customized models for each domain-specific problem, our approach leverages LLMs to turn varied types of textual feedback into consistent utility signals and to incorporate flexible user priors without manual kernel design. At the same time, our method retains the sample efficiency and principled uncertainty quantification of BO. We show that this hybrid method not only provides a more natural interface for the decision maker but also outperforms conventional BO baselines and LLM-only optimizers, particularly in feedback-limited regimes.
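The abstract describes a loop in which BO proposes candidates, a user responds in free-form text, and an LLM maps that text to a scalar utility that updates the surrogate model. The sketch below illustrates this flow under stated assumptions: `mock_llm_utility` is a hypothetical keyword-based stand-in for the LLM scoring step (not the paper's actual model), and the surrogate is a minimal NumPy Gaussian process with a UCB acquisition rule rather than the authors' implementation.

```python
import numpy as np


def mock_llm_utility(feedback: str) -> float:
    """Hypothetical stand-in for an LLM that maps free-form textual
    feedback to a scalar utility in [0, 1]. A real system would prompt
    an LLM; here we score keywords purely for illustration."""
    score = 0.5
    for word in ("great", "good", "better", "excellent"):
        if word in feedback.lower():
            score += 0.2
    for word in ("bad", "worse", "poor", "terrible"):
        if word in feedback.lower():
            score -= 0.2
    return float(np.clip(score, 0.0, 1.0))


def rbf_kernel(a, b, lengthscale=0.3):
    # Squared-exponential kernel between two 1-D point sets.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)


def gp_posterior(x_train, y_train, x_grid, noise=1e-4):
    # Standard GP regression posterior mean and std on a grid.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_grid, x_train)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y_train
    var = 1.0 - np.sum((Ks @ Kinv) * Ks, axis=1)
    return mu, np.sqrt(np.maximum(var, 1e-12))


def language_in_the_loop_bo(feedback_fn, n_iters=5):
    """BO loop where each observation is textual feedback converted
    to a scalar utility by the (mocked) LLM."""
    x_grid = np.linspace(0.0, 1.0, 101)
    xs = [0.5]  # arbitrary initial design point
    ys = [mock_llm_utility(feedback_fn(xs[0]))]
    for _ in range(n_iters):
        mu, sigma = gp_posterior(np.array(xs), np.array(ys), x_grid)
        ucb = mu + 2.0 * sigma  # upper-confidence-bound acquisition
        x_next = float(x_grid[np.argmax(ucb)])
        xs.append(x_next)
        ys.append(mock_llm_utility(feedback_fn(x_next)))
    best = xs[int(np.argmax(ys))]
    return best, xs, ys


# Simulated decision maker who (unbeknownst to the optimizer)
# prefers candidates near x = 0.7.
def simulated_feedback(x):
    return "great, this works" if abs(x - 0.7) < 0.15 else "bad result"


best_x, xs, ys = language_in_the_loop_bo(simulated_feedback, n_iters=5)
```

The key design point from the abstract is that the search itself stays numeric and GP-based (keeping BO's sample efficiency and uncertainty quantification), while only the observation model is delegated to the LLM.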
Similar Papers
Large Scale Multi-Task Bayesian Optimization with Large Language Models
Machine Learning (CS)
AI learns from past jobs to do new ones better.
Cooperative Design Optimization through Natural Language Interaction
Human-Computer Interaction
Lets designers talk to computers to design better things.
Distilling and exploiting quantitative insights from Large Language Models for enhanced Bayesian optimization of chemical reactions
Machine Learning (CS)
Teaches computers to find better ways to make chemicals.