Automatic Prompt Optimization with Prompt Distillation
By: Viktor N. Zhuravlev, Artur R. Khairullin, Ernest A. Dyagin, and more
Potential Business Impact:
Automatically finds better prompts so language models give better answers.
Autoprompting is the automatic selection of optimized prompts for language models, a task gaining popularity alongside the rapid progress of prompt engineering and research on large language models (LLMs). This paper presents DistillPrompt, a novel LLM-based autoprompting method that integrates task-specific information into prompts in multiple stages using training data. DistillPrompt applies distillation, compression, and aggregation operations to explore the prompt space more thoroughly. The method was tested on several text classification and generation datasets using the t-lite-instruct-0.1 language model. The results demonstrate a significant average improvement in key metrics over existing methods in the field (e.g., 20.12% across the entire dataset compared to GrIPS), establishing DistillPrompt as one of the most effective gradient-free approaches in autoprompting.
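The abstract names only the three operations, not their implementation. As a rough illustration, here is a minimal Python sketch of what a gradient-free distill/compress/aggregate search loop of this kind could look like. All function names, the `llm` callable, and the `score` function are hypothetical assumptions for illustration, not the authors' actual method or API.

```python
# Hypothetical sketch of a distill -> compress -> aggregate prompt search.
# `llm` is any text-in/text-out model call; `score` evaluates a candidate
# prompt on a held-out split. Neither is part of the original paper's code.
from typing import Callable, List, Tuple

def distill(llm: Callable[[str], str], prompt: str,
            examples: List[Tuple[str, str]]) -> str:
    """Fold task-specific information from training examples into the prompt."""
    shots = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return llm(f"Given these examples:\n{shots}\n"
               f"Rewrite this instruction to describe the task better:\n{prompt}")

def compress(llm: Callable[[str], str], prompt: str) -> str:
    """Shorten a candidate prompt while preserving its meaning."""
    return llm(f"Compress this instruction, keeping its meaning:\n{prompt}")

def aggregate(llm: Callable[[str], str], candidates: List[str]) -> str:
    """Merge several candidate prompts into a single instruction."""
    joined = "\n---\n".join(candidates)
    return llm(f"Combine these instructions into one clear instruction:\n{joined}")

def distill_prompt(llm: Callable[[str], str], seed_prompt: str,
                   examples: List[Tuple[str, str]],
                   score: Callable[[str], float], n_rounds: int = 3) -> str:
    """Gradient-free search: keep the highest-scoring candidate each round."""
    best = seed_prompt
    for _ in range(n_rounds):
        candidates = [distill(llm, best, examples)]
        candidates.append(compress(llm, candidates[0]))
        candidates.append(aggregate(llm, candidates + [best]))
        best = max(candidates + [best], key=score)
    return best
```

Because selection relies only on scoring candidate prompts against held-out data, no gradients through the model are needed, which matches the paper's framing of DistillPrompt as a gradient-free approach.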
Similar Papers
Automatic Prompt Generation via Adaptive Selection of Prompting Techniques
Computation and Language
Makes computers understand instructions better automatically.
ReflectivePrompt: Reflective evolution in autoprompting algorithms
Computation and Language
Finds best computer instructions for better results.