LLMs for Resource Allocation: A Participatory Budgeting Approach to Inferring Preferences
By: Sankarshan Damle, Boi Faltings
Potential Business Impact:
Helps computers fairly share money for projects.
Large Language Models (LLMs) are increasingly expected to handle complex decision-making tasks, yet their ability to perform structured resource allocation remains underexplored. Evaluating their reasoning is also difficult due to data contamination and the static nature of existing benchmarks. We present a dual-purpose framework leveraging Participatory Budgeting (PB) both as (i) a practical setting for LLM-based resource allocation and (ii) an adaptive benchmark for evaluating their reasoning capabilities. We task LLMs with selecting project subsets under feasibility (e.g., budget) constraints via three prompting strategies: greedy selection, direct optimization, and a hill-climbing-inspired refinement. We benchmark LLMs' allocations against a utility-maximizing oracle. Interestingly, we also test whether LLMs can infer structured preferences from natural-language voter input or metadata, without explicit votes. By comparing allocations based on inferred preferences to those from ground-truth votes, we evaluate LLMs' ability to extract preferences from open-ended input. Our results underscore the role of prompt design and show that LLMs hold promise for mechanism design with unstructured inputs.
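To make the contrast between the greedy prompting baseline and the utility-maximizing oracle concrete, here is a minimal sketch in Python. The project names, costs, and utilities are invented for illustration; the paper's actual PB instances, prompts, and oracle implementation are not specified here. The oracle is a brute-force 0/1 knapsack over project subsets, which matches "utility-maximizing" only for small instances.

```python
from itertools import combinations

def greedy_allocation(projects, budget):
    """Greedy baseline: add projects in order of utility-per-cost
    until the budget is exhausted. Mirrors the 'greedy selection'
    prompting strategy, not the paper's exact prompt."""
    chosen, spent = [], 0
    for name, cost, utility in sorted(projects, key=lambda p: p[2] / p[1], reverse=True):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen

def oracle_allocation(projects, budget):
    """Utility-maximizing oracle: exhaustive search over all feasible
    project subsets (0/1 knapsack). Exponential, so only for small n."""
    best, best_utility = [], 0
    for r in range(len(projects) + 1):
        for subset in combinations(projects, r):
            cost = sum(p[1] for p in subset)
            utility = sum(p[2] for p in subset)
            if cost <= budget and utility > best_utility:
                best, best_utility = [p[0] for p in subset], utility
    return best

# Hypothetical PB instance: (name, cost, total voter utility)
projects = [("park", 50, 60), ("library", 40, 40), ("road", 30, 35)]
print(greedy_allocation(projects, 70))  # ['park'] -- utility 60
print(oracle_allocation(projects, 70))  # ['library', 'road'] -- utility 75
```

On this toy instance the greedy baseline locks in the highest-ratio project and strands the remaining budget, while the oracle finds the better pairing, illustrating the kind of gap the benchmark measures.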
Similar Papers
Adaptive LLM Routing under Budget Constraints
Machine Learning (CS)
Chooses best AI for your question.
Evaluating and Aligning Human Economic Risk Preferences in LLMs
General Economics
Makes AI make smarter money choices.