Enhancing LLM Instruction Following: An Evaluation-Driven Multi-Agentic Workflow for Prompt Instructions Optimization
By: Alberto Purpura, Li Wang, Sahil Badyal, and more
Potential Business Impact:
Makes AI follow instructions and formatting rules more reliably, so answers are both correct and usable.
Large Language Models (LLMs) often generate substantively relevant content but fail to adhere to formal constraints, leading to outputs that are conceptually correct but procedurally flawed. Traditional prompt refinement approaches focus on rephrasing the description of the primary task an LLM has to perform, neglecting the granular constraints that function as acceptance criteria for its response. We propose a novel multi-agentic workflow that decouples optimization of the primary task description from optimization of its constraints, using quantitative compliance scores as feedback to iteratively rewrite and improve both. Our evaluation demonstrates that this method produces revised prompts that yield significantly higher compliance scores from models like Llama 3.1 8B and Mixtral-8x7B.
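To make the workflow concrete, below is a minimal Python sketch of the evaluation-driven loop the abstract describes. The function names (`optimize_prompt`, `compliance_score`), the keyword-based scorer, and the stub rewriter agents are illustrative assumptions, not the paper's actual implementation.

```python
"""Minimal sketch of an evaluation-driven prompt-optimization loop.

Assumptions (not from the paper): the names, the toy 0-1 compliance
scorer, and the stub LLM/rewriter agents are placeholders so the
sketch runs end to end without external services.
"""

from typing import Callable, List, Tuple


def compliance_score(response: str, constraints: List[str]) -> float:
    """Toy scorer: fraction of constraints whose keyword appears in the response.

    The paper uses quantitative compliance scores; this keyword check is
    a stand-in so the sketch is self-contained.
    """
    if not constraints:
        return 1.0
    met = sum(1 for c in constraints if c.lower() in response.lower())
    return met / len(constraints)


def optimize_prompt(
    task: str,
    constraints: List[str],
    call_llm: Callable[[str], str],
    rewrite_task: Callable[[str, float], str],
    rewrite_constraints: Callable[[List[str], float], List[str]],
    max_iters: int = 5,
    target: float = 0.9,
) -> Tuple[str, List[str], float]:
    """Iteratively rewrite the task description and its constraints,
    using the compliance score as feedback, until the target is reached."""
    best = (task, list(constraints), 0.0)
    for _ in range(max_iters):
        # Assemble the candidate prompt and query the model under evaluation.
        prompt = task + "\nConstraints:\n" + "\n".join(f"- {c}" for c in constraints)
        response = call_llm(prompt)
        score = compliance_score(response, constraints)
        if score > best[2]:
            best = (task, list(constraints), score)
        if score >= target:
            break
        # Decoupled rewrites: one agent revises the task description,
        # another revises the constraint list, both guided by the score.
        task = rewrite_task(task, score)
        constraints = rewrite_constraints(constraints, score)
    return best


if __name__ == "__main__":
    # Stub agents so the sketch runs without an actual LLM backend.
    dummy_llm = lambda p: "Answer in JSON with three bullet points."
    keep_task = lambda t, s: t
    keep_cons = lambda cs, s: cs
    print(optimize_prompt("Summarize the report.", ["json", "bullet"],
                          dummy_llm, keep_task, keep_cons))
```

In a real deployment, `call_llm` would query the model under test (e.g., Llama 3.1 8B), while `rewrite_task` and `rewrite_constraints` would each be backed by a separate prompting agent, which is the decoupling the abstract emphasizes.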
Similar Papers
Use Me Wisely: AI-Driven Assessment for LLM Prompting Skills Development
Computers and Society
Teaches computers to grade student writing automatically.
Teaching Language Models To Gather Information Proactively
Artificial Intelligence
Helps AI ask better questions to solve problems.
Improving Cooperation in Collaborative Embodied AI
Artificial Intelligence
AI agents work together better using smart instructions.