Matching Markets Meet LLMs: Algorithmic Reasoning with Ranked Preferences
By: Hadi Hosseini, Samarth Khanna, Ronak Singh
Potential Business Impact:
Helps computers match people fairly, as in dating or ride-sharing apps.
The rise of Large Language Models (LLMs) has driven progress in reasoning tasks -- from program synthesis to scientific hypothesis generation -- yet their ability to handle ranked preferences and structured algorithms in combinatorial domains remains underexplored. We study matching markets, a core framework behind applications like resource allocation and ride-sharing, which require reconciling individual ranked preferences to ensure stable outcomes. We evaluate several state-of-the-art models on a hierarchy of preference-based reasoning tasks -- ranging from stable-matching generation to instability detection, instability resolution, and fine-grained preference queries -- to systematically expose their logical and algorithmic limitations in handling ranked inputs. Surprisingly, even top-performing models with advanced reasoning struggle to resolve instability in large markets, often failing to identify blocking pairs or execute algorithms iteratively. We further show that parameter-efficient fine-tuning (LoRA) significantly improves performance in small markets, but fails to bring about a similar improvement on large instances, suggesting the need for more sophisticated strategies to improve LLMs' reasoning with larger-context inputs.
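The tasks the abstract describes, generating a stable matching and detecting blocking pairs, can be made concrete with a small sketch. The snippet below is illustrative only (the toy instance and function names are not from the paper): it runs the classic Gale-Shapley deferred-acceptance algorithm and then checks a matching for blocking pairs, i.e., a proposer and receiver who both prefer each other to their assigned partners.

```python
# Illustrative sketch: Gale-Shapley stable matching plus a blocking-pair check.
# The toy preference lists below are made up, not taken from the paper.

def gale_shapley(proposer_prefs, receiver_prefs):
    """Return a proposer-optimal stable matching as {proposer: receiver}."""
    # rank[r][p] = position of proposer p in receiver r's preference list
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    free = list(proposer_prefs)                   # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}  # next receiver to propose to
    engaged = {}                                  # receiver -> proposer
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p
        elif rank[r][p] < rank[r][engaged[r]]:    # r prefers p to current partner
            free.append(engaged[r])
            engaged[r] = p
        else:
            free.append(p)                        # r rejects p; p stays free
    return {p: r for r, p in engaged.items()}

def blocking_pairs(matching, proposer_prefs, receiver_prefs):
    """Pairs (p, r) who each prefer the other to their assigned partner."""
    partner_of = {r: p for p, r in matching.items()}
    pairs = []
    for p, prefs in proposer_prefs.items():
        # receivers p strictly prefers to its current match
        for r in prefs[:prefs.index(matching[p])]:
            if receiver_prefs[r].index(p) < receiver_prefs[r].index(partner_of[r]):
                pairs.append((p, r))
    return pairs

proposers = {"a": ["X", "Y", "Z"], "b": ["Y", "X", "Z"], "c": ["X", "Y", "Z"]}
receivers = {"X": ["b", "a", "c"], "Y": ["a", "b", "c"], "Z": ["a", "b", "c"]}
match = gale_shapley(proposers, receivers)
print(match, blocking_pairs(match, proposers, receivers))
```

The "instability resolution" task in the paper asks models to do what `blocking_pairs` does by pure reasoning over ranked lists; even this small instance requires iterating proposals and rejections, which is where the paper reports large-market failures.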
Similar Papers
Reasoning with Preference Constraints: A Benchmark for Language Models in Many-to-One Matching Markets
Artificial Intelligence
Helps computers match students to colleges fairly.
How Reliable are LLMs for Reasoning on the Re-ranking task?
Computation and Language
Tests how reliably computers can re-rank lists of results.
LLMs for Resource Allocation: A Participatory Budgeting Approach to Inferring Preferences
Artificial Intelligence
Helps computers fairly share money for projects.