Score: 1

LLMs for Resource Allocation: A Participatory Budgeting Approach to Inferring Preferences

Published: August 8, 2025 | arXiv ID: 2508.06060v1

By: Sankarshan Damle, Boi Faltings

Potential Business Impact:

Shows how LLMs could fairly divide a shared budget among competing projects.

Large Language Models (LLMs) are increasingly expected to handle complex decision-making tasks, yet their ability to perform structured resource allocation remains underexplored. Evaluating their reasoning is also difficult due to data contamination and the static nature of existing benchmarks. We present a dual-purpose framework leveraging Participatory Budgeting (PB) both as (i) a practical setting for LLM-based resource allocation and (ii) an adaptive benchmark for evaluating their reasoning capabilities. We task LLMs with selecting project subsets under feasibility (e.g., budget) constraints via three prompting strategies: greedy selection, direct optimization, and a hill-climbing-inspired refinement. We benchmark LLMs' allocations against a utility-maximizing oracle. Interestingly, we also test whether LLMs can infer structured preferences from natural-language voter input or metadata, without explicit votes. By comparing allocations based on inferred preferences to those from ground-truth votes, we evaluate LLMs' ability to extract preferences from open-ended input. Our results underscore the role of prompt design and show that LLMs hold promise for mechanism design with unstructured inputs.
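
To make the allocation task concrete, here is a minimal sketch of the setup the abstract describes: selecting a subset of projects under a budget constraint, with a greedy baseline compared against a utility-maximizing oracle. The project names, costs, and utilities below are illustrative assumptions, not data from the paper, and the exhaustive-search oracle stands in for whatever exact solver the authors use.

```python
from itertools import combinations

# Illustrative Participatory Budgeting instance: each project has a cost and
# an aggregate utility (e.g., approvals summed over voters). Not from the paper.
projects = {
    "park":    {"cost": 40, "utility": 9},
    "library": {"cost": 60, "utility": 12},
    "bikes":   {"cost": 30, "utility": 7},
    "wifi":    {"cost": 20, "utility": 4},
}
BUDGET = 100

def greedy_allocation(projects, budget):
    """Greedy baseline: add projects in order of utility-per-cost
    while they still fit within the remaining budget."""
    chosen, spent = [], 0
    ranked = sorted(projects.items(),
                    key=lambda kv: kv[1]["utility"] / kv[1]["cost"],
                    reverse=True)
    for name, p in ranked:
        if spent + p["cost"] <= budget:
            chosen.append(name)
            spent += p["cost"]
    return chosen

def oracle_allocation(projects, budget):
    """Utility-maximizing oracle: exhaustive search over all feasible subsets.
    Fine for tiny instances; larger ones would call for a knapsack DP or ILP."""
    best, best_util = [], 0
    names = list(projects)
    for r in range(len(names) + 1):
        for subset in combinations(names, r):
            cost = sum(projects[n]["cost"] for n in subset)
            util = sum(projects[n]["utility"] for n in subset)
            if cost <= budget and util > best_util:
                best, best_util = list(subset), util
    return best, best_util

if __name__ == "__main__":
    print("greedy:", greedy_allocation(projects, BUDGET))
    print("oracle:", oracle_allocation(projects, BUDGET))
```

In the paper's framing, the LLM plays the role of the allocation procedure (via greedy, direct-optimization, or hill-climbing prompts), and its selected subset is scored against the oracle's total utility; this sketch only shows the non-LLM reference points.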

Country of Origin
🇨🇭 Switzerland

Repos / Data Links

Page Count
17 pages

Category
Computer Science:
Artificial Intelligence