Who Gets the Reward, Who Gets the Blame? Evaluation-Aligned Training Signals for Multi-LLM Agents
By: Chih-Hsuan Yang, Tanwi Mallick, Le Chen, and more
Potential Business Impact:
Teaches teams of AI agents to work together more effectively.
Large Language Models (LLMs) in multi-agent systems (MAS) have shown promise for complex tasks, yet current training methods lack principled ways to connect system-level evaluation with agent-level and message-level learning. We propose a theoretical framework that unifies cooperative game-theoretic attribution with process reward modeling to transform system evaluation into agent credit and then into response-level signals. Unlike prior approaches that rely only on attribution (e.g., Shapley) or step-level labels (e.g., PRM), our method produces local, signed, and credit-conserving signals. In success cases, Shapley-based credit assignment fairly allocates outcomes across agents and is refined into per-message rewards that promote cooperation while discouraging redundancy or sabotage. In failure cases, first-error localization yields repair-aware preferences that penalize harmful steps while rewarding corrective attempts. The resulting signals are bounded, cooperative, and directly compatible with reinforcement-based or preference-based post-training, providing a unified and auditable pathway from global evaluation to local supervision in LLM multi-agent training. Our contribution is conceptual: we present a theoretical foundation and training signals, leaving empirical validation for future work.
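To make the credit-assignment idea concrete, below is a minimal Python sketch of exact Shapley attribution over a small set of agents, followed by a simple proportional rule that splits each agent's credit across its messages. This is an illustration under stated assumptions, not the paper's implementation (the paper is conceptual and provides none): the names `system_value`, `message_weights`, and the toy coalition values are all hypothetical.

```python
# Hypothetical sketch: exact Shapley credit over a small agent set, then a
# proportional refinement of each agent's credit into per-message rewards.
# All names and the toy values below are illustrative assumptions, not an
# API or result from the paper.

from itertools import combinations
from math import factorial
from typing import Callable, Dict, FrozenSet, List


def shapley_credit(
    agents: List[str],
    system_value: Callable[[FrozenSet[str]], float],
) -> Dict[str, float]:
    """Exact Shapley values: each agent's average marginal contribution
    to the system-level evaluation, averaged over coalition orderings."""
    n = len(agents)
    credit = {a: 0.0 for a in agents}
    for agent in agents:
        others = [a for a in agents if a != agent]
        for k in range(len(others) + 1):
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                marginal = system_value(s | {agent}) - system_value(s)
                credit[agent] += weight * marginal
    # Credit-conserving: values sum to system_value(all) - system_value(empty).
    return credit


def per_message_rewards(
    agent_credit: float,
    message_weights: List[float],
) -> List[float]:
    """Split one agent's credit across its messages in proportion to
    nonnegative per-message weights (e.g., a process-reward-style score).
    A hypothetical refinement rule; the paper leaves the exact rule abstract."""
    total = sum(message_weights)
    if total == 0:
        return [agent_credit / len(message_weights)] * len(message_weights)
    return [agent_credit * w / total for w in message_weights]


if __name__ == "__main__":
    # Toy coalition values: a planner and a coder are complementary, while a
    # third agent adds nothing, so its Shapley credit comes out to zero.
    values = {
        frozenset(): 0.0,
        frozenset({"planner"}): 0.2,
        frozenset({"coder"}): 0.3,
        frozenset({"redundant"}): 0.0,
        frozenset({"planner", "coder"}): 1.0,
        frozenset({"planner", "redundant"}): 0.2,
        frozenset({"coder", "redundant"}): 0.3,
        frozenset({"planner", "coder", "redundant"}): 1.0,
    }
    credit = shapley_credit(["planner", "coder", "redundant"], lambda s: values[s])
    print(credit)  # planner: 0.45, coder: 0.55, redundant: 0.0
    print(per_message_rewards(credit["coder"], message_weights=[0.1, 0.6, 0.3]))
```

In this toy run the redundant agent receives zero credit and the remaining credit sums to the full system score, matching the "credit-conserving" property the abstract describes; the failure-case signals (first-error localization and repair-aware preferences) would replace `message_weights` with penalties on the first harmful step and bonuses for corrective attempts.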
Similar Papers
Shapley-Coop: Credit Assignment for Emergent Cooperation in Self-Interested LLM Agents
Multiagent Systems
Makes AI agents share tasks fairly and work together.
Stochastic Self-Organization in Multi-Agent Systems
Multiagent Systems
Agents learn to talk better for smarter answers.
Interaction Dynamics as a Reward Signal for LLMs
Computation and Language
Teaches computers how to talk better by watching how they chat.