Score: 1

Learning to Incentivize: LLM-Empowered Contract for AIGC Offloading in Teleoperation

Published: August 5, 2025 | arXiv ID: 2508.03464v1

By: Zijun Zhan, Yaxian Dong, Daniel Mawunyo Doe, and others

Potential Business Impact:

Provides a bonus-design scheme that fairly rewards edge AI service providers, motivating them to deliver higher-quality AI-generated content even when their costs and effort cannot be directly observed.

With the rapid growth in demand for AI-generated content (AIGC), edge AIGC service providers (ASPs) have become indispensable. However, designing incentive mechanisms that motivate ASPs to deliver high-quality AIGC services remains a challenge, especially in the presence of information asymmetry. In this paper, we address bonus design between a teleoperator and an edge ASP when the teleoperator cannot observe the ASP's private settings and chosen actions (diffusion steps). We formulate this as an online learning contract design problem and decompose it into two subproblems: ASP's settings inference and contract derivation. To tackle the NP-hard setting-inference subproblem with unknown variable sizes, we introduce a large language model (LLM)-empowered framework that iteratively refines a naive seed solver using the LLM's domain expertise. Upon obtaining the solution from the LLM-evolved solver, we directly address the contract derivation problem using convex optimization techniques and obtain a near-optimal contract. Simulation results on our Unity-based teleoperation platform show that our method boosts the teleoperator's utility by 5% to 40% compared to benchmarks, while preserving positive incentives for the ASP. The code is available at https://github.com/Zijun0819/llm4contract.
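To make the two-stage decomposition concrete, below is a minimal, hypothetical Python sketch of the pipeline the abstract describes: a naive seed solver for inferring the ASP's hidden diffusion-step settings, a stubbed LLM-refinement step, and a small convex program for the bonus contract. The function names, the logarithmic quality model, the per-step cost, and the use of cvxpy are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

```python
# Hypothetical sketch of the two-stage pipeline (setting inference + contract
# derivation). All modeling choices here are illustrative assumptions.
import numpy as np
import cvxpy as cp

def seed_solver(observed_quality: np.ndarray) -> np.ndarray:
    """Naive seed: infer the ASP's hidden diffusion-step settings by ranking
    observed output quality (a crude placeholder heuristic)."""
    return np.argsort(observed_quality) + 1

def llm_refine(solver, feedback: str):
    """Stub for the LLM-empowered refinement loop: in the paper the LLM
    iteratively rewrites the seed solver using domain expertise; here the
    solver is returned unchanged."""
    return solver

def derive_contract(steps: np.ndarray, cost_per_step: float = 0.1):
    """Toy convex program: choose bonuses that maximize teleoperator utility
    (quality minus payment) subject to an individual-rationality constraint,
    i.e., each bonus must cover the ASP's inferred cost."""
    bonus = cp.Variable(len(steps), nonneg=True)
    quality = np.log1p(steps)                        # assumed concave quality model
    objective = cp.Maximize(np.sum(quality) - cp.sum(bonus))
    constraints = [bonus >= cost_per_step * steps]   # IR constraint
    cp.Problem(objective, constraints).solve()
    return bonus.value

if __name__ == "__main__":
    obs = np.array([0.62, 0.71, 0.85])               # mock observed AIGC quality
    solver = llm_refine(seed_solver, feedback="initial round")
    inferred_steps = solver(obs)
    print("inferred diffusion steps:", inferred_steps)
    print("derived bonuses:", derive_contract(inferred_steps))
```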

Country of Origin
🇺🇸 United States

Repos / Data Links
https://github.com/Zijun0819/llm4contract

Page Count
14 pages

Category
Computer Science:
Computational Engineering, Finance, and Science