MAPGD: Multi-Agent Prompt Gradient Descent for Collaborative Prompt Optimization

Published: September 14, 2025 | arXiv ID: 2509.11361v1

By: Yichen Han, Bojun Liu, Zhengpeng Zhou, and others

Potential Business Impact:

Automatically refines prompts so LLMs follow instructions more accurately and at lower computational cost.

Business Areas:
Guides, Media and Entertainment

Prompt engineering is crucial for leveraging large language models (LLMs), but existing methods often rely on a single optimization trajectory, limiting adaptability and efficiency while suffering from narrow perspectives, gradient conflicts, and high computational cost. We propose MAPGD (Multi-Agent Prompt Gradient Descent), a framework integrating multi-agent collaboration with gradient-based optimization. MAPGD features specialized agents for task clarity, example selection, format design, and stylistic refinement; semantic gradient coordination to resolve conflicts; bandit-based candidate selection for efficient exploration-exploitation; and theoretical convergence guarantees. Experiments on classification, generation, and reasoning tasks show MAPGD outperforms single-agent and random baselines in accuracy and efficiency. Ablations confirm the benefits of gradient fusion, agent specialization, and conflict resolution, providing a unified, gradient-inspired multi-agent approach to robust and interpretable prompt optimization.
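The loop described in the abstract — specialist agents proposing edits, conflicts resolved, and a bandit choosing among candidates — can be illustrated with a minimal sketch. This is not the paper's implementation: the agents here are toy string edits rather than LLM calls, `score` is a stand-in for task accuracy, and the "semantic gradient" is reduced to a scalar improvement; only the UCB1 bandit structure is standard.

```python
import math

# Hypothetical specialist agents: each proposes an edited prompt.
# In MAPGD these would be LLM-driven; here they are toy string edits.
AGENTS = {
    "clarity":  lambda p: p if "Answer concisely." in p else p + " Answer concisely.",
    "examples": lambda p: p if "Example:" in p else p + " Example: input -> output.",
    "format":   lambda p: p if "Reply in JSON." in p else p + " Reply in JSON.",
    "style":    lambda p: p if "Be formal." in p else p + " Be formal.",
}

def score(prompt: str) -> float:
    """Toy stand-in for task accuracy: rewards prompts that contain
    the cue each specialist contributes."""
    cues = ["Answer concisely.", "Example:", "Reply in JSON.", "Be formal."]
    return sum(cue in prompt for cue in cues) / len(cues)

def mapgd_sketch(prompt: str, steps: int = 12) -> str:
    counts = {name: 0 for name in AGENTS}
    rewards = {name: 0.0 for name in AGENTS}
    for t in range(1, steps + 1):
        # UCB1 over agents: exploit high-reward specialists,
        # but keep exploring rarely tried ones.
        def ucb(name):
            if counts[name] == 0:
                return float("inf")
            mean = rewards[name] / counts[name]
            return mean + math.sqrt(2 * math.log(t) / counts[name])
        name = max(AGENTS, key=ucb)
        candidate = AGENTS[name](prompt)
        # Scalar proxy for the paper's "semantic gradient":
        # the improvement the proposed edit yields.
        gain = score(candidate) - score(prompt)
        counts[name] += 1
        rewards[name] += max(gain, 0.0)
        if gain > 0:  # accept only edits that improve the prompt
            prompt = candidate
    return prompt
```

Each iteration plays the exploration-exploitation trade-off the abstract mentions: the bandit favors agents whose past edits improved the score, while the untried-arm bonus guarantees every specialist is consulted at least once.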

Country of Origin
🇨🇦 🇨🇳 Canada, China

Page Count
13 pages

Category
Computer Science:
Artificial Intelligence