Eliciting Truthful Feedback for Preference-Based Learning via the VCG Mechanism
By: Leo Landolt, Anna Maddux, Andreas Schlaginhaufen, and more
Potential Business Impact:
Helps groups share resources fairly and efficiently, even when people might lie about their costs.
We study resource allocation problems in which a central planner allocates resources among strategic agents with private cost functions in order to minimize a social cost, defined as an aggregate of the agents' costs. This setting poses two main challenges: (i) the agents' cost functions may be unknown to them or difficult to specify explicitly, and (ii) agents may misreport their costs strategically. To address these challenges, we propose an algorithm that combines preference-based learning with Vickrey-Clarke-Groves (VCG) payments to incentivize truthful reporting. Our algorithm selects informative preference queries via D-optimal design, estimates cost parameters through maximum likelihood, and computes VCG allocations and payments based on these estimates. In a one-shot setting, we prove that the mechanism is approximately truthful, individually rational, and efficient up to an error of $\tilde{\mathcal O}(K^{-1/2})$ for $K$ preference queries per agent. In an online setting, these guarantees hold asymptotically with sublinear regret at a rate of $\tilde{\mathcal O}(T^{2/3})$ after $T$ rounds. Finally, we validate our approach through a numerical case study on demand response in local electricity markets.
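To make the pipeline in the abstract concrete, here is a minimal, illustrative sketch (not the paper's implementation) under a simplifying assumption: each agent i has a private quadratic cost c_i(x) = theta_i * x^2 over its share x of a divisible budget B. The planner simulates K pairwise preference queries per agent, fits theta_i by maximum likelihood under a Bradley-Terry response model, and computes the VCG allocation and Clarke-pivot payments from the estimates. All function names and the quadratic-cost form are assumptions chosen for readability; the paper's D-optimal query selection is replaced here by random queries.

```python
import numpy as np

def social_cost(thetas, B):
    """Minimize sum_i theta_i * x_i^2 subject to sum_i x_i = B.

    Closed form via Lagrange multipliers: x_i* is proportional to 1/theta_i.
    """
    w = 1.0 / thetas
    x = B * w / w.sum()
    return float((thetas * x**2).sum()), x

def vcg(thetas, B):
    """VCG allocation and Clarke-pivot payments in the cost-minimization setting."""
    cost, x = social_cost(thetas, B)
    payments = np.zeros(len(thetas))
    for i in range(len(thetas)):
        cost_without_i, _ = social_cost(np.delete(thetas, i), B)
        cost_of_others = cost - thetas[i] * x[i] ** 2
        # Pay agent i the cost it saves the other agents by participating.
        payments[i] = cost_without_i - cost_of_others
    return x, payments

def estimate_theta(true_theta, K, rng):
    """Fit theta from K noisy pairwise comparisons (Bradley-Terry logistic MLE)."""
    d = rng.uniform(-1.0, 1.0, K)  # cost-difference feature of each query pair
    prob = 1.0 / (1.0 + np.exp(-true_theta * d))
    y = np.where(rng.random(K) < prob, 1.0, -1.0)  # simulated preference answers
    theta = 1.0
    for _ in range(50):  # Newton's method on the concave log-likelihood
        s = 1.0 / (1.0 + np.exp(-y * theta * d))
        grad = np.sum(y * d * (1.0 - s))
        hess = np.sum(d**2 * s * (1.0 - s))
        theta += grad / hess
    return theta

rng = np.random.default_rng(0)
true_thetas = np.array([1.0, 2.0, 4.0])
est = np.array([estimate_theta(t, K=2000, rng=rng) for t in true_thetas])
alloc, pay = vcg(est, B=1.0)
```

In this toy setting the Clarke payments are nonnegative (agents are compensated for the positive externality of sharing the load), and the allocation gives larger shares to lower-cost agents, mirroring the efficiency and individual-rationality guarantees the paper proves up to the O~(K^(-1/2)) estimation error.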
Similar Papers
Truthful Double Auctions under Approximate VCG: Immediate-Penalty Enforcement in P2P Energy Trading
CS and Game Theory
Keeps peer-to-peer energy trading honest by penalizing misreports.
Truthful and Trustworthy IoT AI Agents via Immediate-Penalty Enforcement under Approximate VCG Mechanisms
CS and Game Theory
Makes smart homes trade energy fairly and safely.
From Fairness to Truthfulness: Rethinking Data Valuation Design
CS and Game Theory
Pays people fairly for data used by AI.