Model Performance-Guided Evaluation Data Selection for Effective Prompt Optimization
By: Ximing Dong, Shaowei Wang, Dayi Lin, and more
Potential Business Impact:
Picks better test examples so AI prompts can be tuned more reliably and cheaply.
Optimizing Large Language Model (LLM) performance requires well-crafted prompts, but manual prompt engineering is labor-intensive and often ineffective. Automated prompt optimization techniques address this challenge, but most rely on randomly selected evaluation subsets that fail to represent the full dataset, leading to unreliable evaluations and suboptimal prompts. Existing coreset selection methods, designed for LLM benchmarking, are unsuitable for prompt optimization due to challenges in clustering similar samples, high data collection costs, and the unavailability of performance data for new or private datasets. To overcome these issues, we propose IPOMP, an Iterative evaluation data selection approach for effective Prompt Optimization using real-time Model Performance. IPOMP is a two-stage approach that first selects representative and diverse samples using semantic clustering and boundary analysis, then iteratively refines the selection with real-time model performance data to replace redundant samples. Evaluations on the BIG-bench dataset show that IPOMP improves effectiveness by 1.6% to 5.3% and stability by at least 57% compared with SOTA baselines, with computational overhead below 1%. Furthermore, the results demonstrate that our real-time performance-guided refinement approach can be applied universally to enhance existing coreset selection methods.
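For readers who want a concrete feel for the two stages, here is a minimal sketch based only on the abstract. The embedding source, cluster count, and the use of score-vector correlation as the redundancy signal are illustrative assumptions, not the paper's actual settings or implementation.

```python
# Sketch of a two-stage evaluation-data selection in the spirit of IPOMP.
# Assumptions (not from the paper): precomputed sentence embeddings,
# k-means clustering, and Pearson correlation of per-prompt scores as
# the redundancy test during refinement.
import numpy as np
from sklearn.cluster import KMeans


def select_initial(embeddings: np.ndarray, k: int, budget: int) -> list[int]:
    """Stage 1: semantic clustering + boundary analysis.

    Picks the sample closest to each centroid (representative) and samples
    lying nearly equidistant between two centroids (boundary/diverse).
    """
    km = KMeans(n_clusters=k, n_init="auto", random_state=0).fit(embeddings)
    dists = km.transform(embeddings)  # (n_samples, k) distances to centroids
    reps = [int(np.argmin(dists[:, c])) for c in range(k)]
    # Boundary score: gap between nearest and second-nearest centroid;
    # a small gap means the sample sits near a cluster boundary.
    two_nearest = np.sort(dists, axis=1)[:, :2]
    margin = two_nearest[:, 1] - two_nearest[:, 0]
    boundary = [int(i) for i in np.argsort(margin) if int(i) not in reps]
    return (reps + boundary)[:budget]


def refine(selected: list[int], pool: list[int],
           perf: np.ndarray, corr_thresh: float = 0.95) -> list[int]:
    """Stage 2: iterative refinement with real-time model performance.

    perf[i, t] is sample i's score under candidate prompt t observed so far.
    Samples whose score vectors are highly correlated are treated as
    redundant; one of each pair is swapped for an unused pool sample.
    """
    selected = list(selected)
    for a in range(len(selected)):
        for b in range(a + 1, len(selected)):
            r = np.corrcoef(perf[selected[a]], perf[selected[b]])[0, 1]
            if r > corr_thresh and pool:
                selected[b] = pool.pop(0)  # replace the redundant sample
    return selected
```

In this reading, stage 1 runs once up front, while stage 2 is re-run as new per-prompt scores arrive during optimization, which is where the "minimal computational overhead" claim would come from: redundancy checks reuse scores the optimizer already produces.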
Similar Papers
MOPrompt: Multi-objective Semantic Evolution for Prompt Optimization
Computation and Language
Makes AI smarter and uses less computer power.
ELPO: Ensemble Learning Based Prompt Optimization for Large Language Models
Computation and Language
Makes AI better at tasks by finding the best instructions.
Local Prompt Optimization
Computation and Language
Helps AI write better answers by focusing on key words.