LLM-Guided Compositional Program Synthesis
By: Ruhma Khan, Sumit Gulwani, Vu Le, and more
Potential Business Impact:
Breaks large programming tasks into smaller ones an AI can solve.
Program synthesis from input-output examples, also called programming by example (PBE), has had tremendous impact on automating end-user tasks. Large language models (LLMs) have the ability to solve PBE tasks by generating code in different target languages, but they can fail unpredictably. To recover from failure, most approaches, such as self-reflection, use the LLM to solve the same task again, but with a richer context. We introduce a novel technique that recovers from failure by constructing simpler subtasks for the LLM to solve. Our approach performs compositional program synthesis using LLMs, where the LLM not only guides the decomposition of the PBE task into subtasks, but also solves the subtasks. We present different strategies for decomposing the original task. We experimentally show that our approach can solve challenging task instances that are not solved by self-reflection alone.
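The recovery strategy described in the abstract can be pictured as a recursive loop: attempt the whole PBE task first, and on failure ask the LLM to split it into simpler subtasks whose solutions compose into a solution for the original task. The Python sketch below is illustrative only and assumes one particular decomposition strategy (splitting each task into an "input to intermediate" and an "intermediate to output" stage). The `generate` and `decompose` callables stand in for LLM queries; they are assumptions for this sketch, not the paper's actual interface.

```python
# Minimal sketch of an LLM-guided compositional synthesis loop.
# `generate` and `decompose` are hypothetical LLM-backed callables
# supplied by the caller; they do not reflect the paper's exact API.

from typing import Callable, List, Optional, Tuple

Example = Tuple[str, str]          # (input, expected output)
Program = Callable[[str], str]

def satisfies(program: Program, examples: List[Example]) -> bool:
    """Accept a candidate only if it reproduces every input-output example."""
    try:
        return all(program(inp) == out for inp, out in examples)
    except Exception:
        return False

def synthesize(
    examples: List[Example],
    generate: Callable[[List[Example]], Optional[Program]],     # LLM: examples -> program
    decompose: Callable[[List[Example]], Optional[List[str]]],  # LLM: examples -> intermediate outputs
    depth: int = 2,
) -> Optional[Program]:
    """Try the full task; on failure, split it into two simpler PBE subtasks."""
    candidate = generate(examples)
    if candidate is not None and satisfies(candidate, examples):
        return candidate
    if depth == 0:
        return None

    # The LLM proposes one intermediate value per example, cutting the task
    # into "input -> intermediate" and "intermediate -> output" subtasks.
    intermediates = decompose(examples)
    if intermediates is None:
        return None

    first = synthesize(
        [(inp, mid) for (inp, _), mid in zip(examples, intermediates)],
        generate, decompose, depth - 1)
    second = synthesize(
        [(mid, out) for (_, out), mid in zip(examples, intermediates)],
        generate, decompose, depth - 1)
    if first is not None and second is not None:
        composed: Program = lambda x: second(first(x))
        if satisfies(composed, examples):
            return composed
    return None
```

In this sketch the subtasks are themselves solved by the same routine, so decomposition can recurse until a depth budget is exhausted; other decomposition strategies mentioned in the abstract would change how `decompose` proposes the intermediate examples, not the overall loop.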
Similar Papers
Can LLMs Reason About Program Semantics? A Comprehensive Evaluation of LLMs on Formal Specification Inference
Programming Languages
Tests if computers can understand code logic.
Compositional Translation: A Novel LLM-based Approach for Low-resource Machine Translation
Computation and Language
Translates sentences better by breaking them down.
Can LLMs Formally Reason as Abstract Interpreters for Program Analysis?
Machine Learning (CS)
Helps computers check computer code for mistakes.