LISTEN to Your Preferences: An LLM Framework for Multi-Objective Selection
By: Adam S. Jovine, Tinghan Ye, Francis Bahk, and more
Potential Business Impact:
Helps computers pick the best choice using your words.
Human experts often struggle to select the best option from a large set of items with multiple competing objectives, a process bottlenecked by the difficulty of formalizing complex, implicit preferences. To address this, we introduce LISTEN, a framework that leverages a Large Language Model (LLM) as a zero-shot preference oracle, guided only by an expert's high-level priorities in natural language. To operate within LLM constraints like context windows and inference costs, we propose two iterative algorithms: LISTEN-U, which uses the LLM to refine a parametric utility function, and LISTEN-T, a non-parametric method that performs tournament-style selections over small batches of solutions. Evaluated on diverse tasks including flight booking, shopping, and exam scheduling, LISTEN-U excels when preferences are parametrically aligned (a property we measure with a novel concordance metric), while LISTEN-T offers more robust performance. This work explores a promising direction for steering complex multi-objective decisions directly with natural language, reducing the cognitive burden of traditional preference elicitation.
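The abstract only names the two algorithms, so as a rough illustration, here is a minimal sketch of what a LISTEN-T-style tournament loop could look like. All names here (`preference_oracle`, `tournament_select`, the toy flight fields) are hypothetical, not the authors' code; a real implementation would replace the stand-in oracle with an actual LLM call that receives the batch and the expert's natural-language priorities.

```python
# Hypothetical sketch of a LISTEN-T-style tournament, not the paper's implementation.
import random

def preference_oracle(batch, priorities):
    """Stand-in for a zero-shot LLM judgment: given a small batch of candidate
    solutions and the expert's natural-language priorities, return the one the
    oracle prefers. A toy rule (cheapest, then shortest) substitutes for the model."""
    return min(batch, key=lambda s: (s["cost"], s["duration"]))

def tournament_select(solutions, priorities, batch_size=4, rounds=3):
    """Repeatedly pit small batches of candidates against each other and keep
    each batch's winner, so only `batch_size` items ever need to fit in the
    LLM's context window at once."""
    pool = list(solutions)
    for _ in range(rounds):
        random.shuffle(pool)
        winners = []
        for i in range(0, len(pool), batch_size):
            winners.append(preference_oracle(pool[i:i + batch_size], priorities))
        if len(winners) <= 1:
            return winners[0]
        pool = winners
    # If candidates remain after the round budget, run one final comparison.
    return preference_oracle(pool, priorities)

# Example: choose a flight under competing cost/duration objectives.
flights = [{"id": i, "cost": random.randint(100, 900),
            "duration": random.randint(60, 600)} for i in range(32)]
best = tournament_select(flights, "I care most about price, then total travel time.")
print(best)
```

The batching mirrors the constraint the abstract highlights: rather than asking the LLM to rank all candidates at once, each query stays small, trading more (cheap) comparisons for a bounded context window.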
Similar Papers
Learning LLM Preference over Intra-Dialogue Pairs: A Framework for Utterance-level Understandings
Computation and Language
Makes smart computer chats faster and more accurate.
LLMs for Resource Allocation: A Participatory Budgeting Approach to Inferring Preferences
Artificial Intelligence
Helps computers fairly share money for projects.
Evaluating Podcast Recommendations with Profile-Aware LLM-as-a-Judge
Information Retrieval
AI judges podcast picks like a person.