MaxShapley: Towards Incentive-compatible Generative Search with Fair Context Attribution
By: Sara Patel, Mingxun Zhou, Giulia Fanti
Generative search engines based on large language models (LLMs) are replacing traditional search, fundamentally changing how information providers are compensated. To sustain this ecosystem, we need fair mechanisms to attribute and compensate content providers based on their contributions to generated answers. We introduce MaxShapley, an efficient algorithm for fair attribution in generative search pipelines that use retrieval-augmented generation (RAG). MaxShapley is a special case of the celebrated Shapley value; it leverages a decomposable max-sum utility function to compute attributions with computation that is linear in the number of documents, as opposed to the exponential cost of general Shapley values. We evaluate MaxShapley on three multi-hop QA datasets (HotPotQA, MuSiQue, MS MARCO); MaxShapley achieves attribution quality comparable to exact Shapley computation while consuming a fraction of its tokens. For instance, it gives up to an 8x reduction in resource consumption over prior state-of-the-art methods at the same attribution accuracy.
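The abstract does not spell out the MaxShapley algorithm itself, but the key structural idea it names (a max-style utility admits cheap Shapley values) can be illustrated. The sketch below is illustrative and not the authors' implementation: for a toy "max game" v(S) = max over documents in S of a per-document score, the Shapley value has a well-known closed form computable after a single sort, whereas the general Shapley formula sums over exponentially many subsets. The score vector `u` and function names are hypothetical.

```python
import math
from itertools import combinations

def shapley_bruteforce(u):
    """Exact Shapley values for the max game v(S) = max_{i in S} u[i],
    v(empty) = 0, via the subset formula -- exponential in len(u)."""
    n = len(u)
    def v(S):
        return max((u[i] for i in S), default=0.0)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # weight = |S|! (n - |S| - 1)! / n!
                w = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
                phi[i] += w * (v(S + (i,)) - v(S))
    return phi

def shapley_max_closed_form(u):
    """Closed-form Shapley values for the same max game.
    Sort scores ascending; the k-th smallest player (1-indexed) gets
    sum_{j<=k} (u_(j) - u_(j-1)) / (n - j + 1).  O(n log n) total."""
    n = len(u)
    order = sorted(range(n), key=lambda i: u[i])
    phi = [0.0] * n
    prev = 0.0
    acc = 0.0
    for rank, i in enumerate(order):  # rank runs 0..n-1 over ascending scores
        acc += (u[i] - prev) / (n - rank)
        phi[i] = acc
        prev = u[i]
    return phi
```

For example, with hypothetical per-document scores `[3.0, 1.0, 2.0]`, both routines return the same attributions, and the attributions sum to the grand-coalition utility max(u) = 3.0, as efficiency requires. The same trick extends linearly to a sum of per-fact max games, which is the decomposable "max-sum" structure the abstract refers to.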