MAPGD: Multi-Agent Prompt Gradient Descent for Collaborative Prompt Optimization
By: Yichen Han, Bojun Liu, Zhengpeng Zhou, and others
Potential Business Impact:
Automates prompt tuning so LLMs follow instructions more accurately and at lower compute cost.
Prompt engineering is crucial for leveraging large language models (LLMs), but existing methods often rely on a single optimization trajectory, which limits adaptability and efficiency and suffers from narrow perspectives, gradient conflicts, and high computational cost. We propose MAPGD (Multi-Agent Prompt Gradient Descent), a framework that integrates multi-agent collaboration with gradient-based optimization. MAPGD features specialized agents for task clarity, example selection, format design, and stylistic refinement; semantic gradient coordination to resolve conflicts among agents; bandit-based candidate selection for efficient exploration-exploitation; and theoretical convergence guarantees. Experiments on classification, generation, and reasoning tasks show that MAPGD outperforms single-agent and random baselines in both accuracy and efficiency. Ablations confirm the benefits of gradient fusion, agent specialization, and conflict resolution, yielding a unified, gradient-inspired multi-agent approach to robust and interpretable prompt optimization.
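The abstract describes a loop in which specialist agents each propose a textual "gradient" (an edit from their own perspective), a coordinator reconciles conflicting edits, and a bandit policy decides which fused candidates to evaluate. Below is a minimal sketch of that loop under stated assumptions: every name here (propose_edit, coordinate, score, ucb_select) is a hypothetical stand-in for illustration, not the authors' implementation, and the LLM calls are mocked.

```python
# Hypothetical sketch of a MAPGD-style loop; not the paper's code.
# Specialist agents propose textual "gradients" (edit suggestions),
# a coordinator filters conflicting edits, and a UCB bandit spends
# a fixed evaluation budget across the candidate prompts.
import math
import random

random.seed(0)

ROLES = ["task_clarity", "example_selection", "format_design", "style"]

def propose_edit(role: str, prompt: str) -> str:
    """Placeholder for an LLM call: critique `prompt` from one
    specialist's perspective and return an edited candidate."""
    return f"{prompt} <{role} fix>"

def coordinate(edits: list[str]) -> list[str]:
    """Stand-in for semantic gradient coordination: keep only edits
    that do not collide with an already-accepted one (here, naively,
    exact duplicates are treated as collisions)."""
    seen, kept = set(), []
    for e in edits:
        if e not in seen:
            seen.add(e)
            kept.append(e)
    return kept

def score(prompt: str) -> float:
    """Placeholder for evaluating a prompt on a task minibatch; a real
    system would run the LLM and measure accuracy."""
    return random.random()

def ucb_select(stats: dict, t: int, c: float = 1.4) -> str:
    """Pick the candidate with the highest upper confidence bound:
    mean reward plus an exploration bonus for rarely tried prompts."""
    def ucb(p):
        n, total = stats[p]
        if n == 0:
            return float("inf")  # try every candidate at least once
        return total / n + c * math.sqrt(math.log(t) / n)
    return max(stats, key=ucb)

def mapgd(seed_prompt: str, steps: int = 5) -> str:
    current = seed_prompt
    for _ in range(steps):
        edits = [propose_edit(r, current) for r in ROLES]
        candidates = coordinate(edits) + [current]
        stats = {p: [0, 0.0] for p in candidates}  # [pulls, total reward]
        for t in range(1, 4 * len(candidates)):    # bandit evaluation budget
            pick = ucb_select(stats, t)
            stats[pick][0] += 1
            stats[pick][1] += score(pick)
        # descend to the candidate with the best empirical mean
        current = max(stats, key=lambda p: stats[p][1] / max(stats[p][0], 1))
    return current

print(mapgd("Classify the sentiment of the review."))
```

In a real system each placeholder would be an LLM call; the UCB term is one common way to trade off re-testing strong candidates against exploring untried ones without evaluating every prompt on the full dataset.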
Similar Papers
Learning Optimal Prompt Ensemble for Multi-source Visual Prompt Transfer
Computation and Language
Combines AI knowledge for better task learning.
MODP: Multi Objective Directional Prompting
Computational Complexity
Makes computer helpers understand instructions better.
HGMP: Heterogeneous Graph Multi-Task Prompt Learning
Machine Learning (CS)
Helps computers understand complex data better.