An Exploratory Study of Bayesian Prompt Optimization for Test-Driven Code Generation with Large Language Models
By: Shlok Tomar, Aryan Deshwal, Ethan Villalovoz, and more
We consider the task of generating functionally correct code using large language models (LLMs). The correctness of generated code is influenced by the prompt used to query the given base LLM. We formulate the problem of finding an appropriate prompt as a combinatorial search process and propose a Bayesian optimization (BO) approach referred to as BO for Code GENeration (BODE-GEN). BODE-GEN performs an adaptive, data-driven search over prompts guided by training data in the form of previously tried prompts and the functional accuracy of the code they produce on a set of given test cases. The key insight is to perform BO in a continuous embedding space, using an auxiliary LLM to bridge the gap between the discrete prompt space and the continuous embedding space. We leverage two synergistic ideas, namely random projections and dimensionality-scaled priors, to build effective Gaussian process-based surrogate models over the high-dimensional embedding space. Our experiments on the HumanEval+ benchmark using multiple base LLMs show that BODE-GEN improves code generation accuracy compared to fixed prompts and manual prompt engineering. Additionally, we demonstrate that BODE-GEN is sample-efficient, requiring relatively few BO iterations to achieve these accuracy improvements.
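To make the abstract's search loop concrete, here is a minimal, hypothetical sketch of BO over a continuous prompt-embedding space with a Gaussian process surrogate and a random projection. It is not the authors' implementation: the helpers decode_embedding and run_test_cases are stand-ins for the auxiliary LLM and the test-case harness, scikit-learn's GaussianProcessRegressor with a Matern kernel is used as a generic surrogate, a plain upper-confidence-bound acquisition replaces whatever the paper uses, and the dimensionality-scaled priors are not modeled.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

EMBED_DIM = 1024   # assumed size of the auxiliary LLM's embedding space
PROJ_DIM = 32      # low-dimensional subspace actually searched by BO
N_INIT, N_ITERS = 5, 20

# Random projection: search a low-dimensional space and lift points back up.
projection = rng.standard_normal((PROJ_DIM, EMBED_DIM)) / np.sqrt(PROJ_DIM)

def decode_embedding(embedding):
    # Stand-in for the auxiliary LLM that maps a continuous embedding back to a
    # discrete prompt (hypothetical helper, not from the paper).
    return "prompt reconstructed from embedding"

def run_test_cases(prompt):
    # Stand-in for generating code with the base LLM and scoring it on the given
    # test cases; returns a synthetic pass rate so the sketch runs end to end.
    return rng.random()

def objective(z):
    # Score a point in the projected space: lift it to the embedding space,
    # decode it to a prompt, and measure functional accuracy of the generated code.
    prompt = decode_embedding(z @ projection)
    return run_test_cases(prompt)

# Initial design: a handful of random points in the projected space.
Z = rng.uniform(-1.0, 1.0, size=(N_INIT, PROJ_DIM))
y = np.array([objective(z) for z in Z])

for _ in range(N_ITERS):
    # Gaussian process surrogate fit in the low-dimensional projected space.
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(Z, y)

    # Simple upper-confidence-bound acquisition over random candidate points.
    candidates = rng.uniform(-1.0, 1.0, size=(256, PROJ_DIM))
    mean, std = gp.predict(candidates, return_std=True)
    z_next = candidates[np.argmax(mean + 2.0 * std)]

    Z = np.vstack([Z, z_next])
    y = np.append(y, objective(z_next))

best_embedding = Z[np.argmax(y)] @ projection  # best prompt embedding found

The random projection is what keeps the GP surrogate tractable here: the surrogate is fit over a 32-dimensional search space rather than the full embedding dimension, which mirrors the abstract's motivation for combining random projections with scaled priors in high dimensions.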