Score: 2

Implementing Rational Choice Functions with LLMs and Measuring their Alignment with User Preferences

Published: April 22, 2025 | arXiv ID: 2504.15719v1

By: Anna Karnysheva, Christian Drescher, Dietrich Klakow

BigTech Affiliations: Mercedes-Benz

Potential Business Impact:

Helps AI systems make choices that match what users actually prefer.

Business Areas:
Personalization, Commerce and Shopping

As large language models (LLMs) become integral to intelligent user interfaces (IUIs), their role as decision-making agents raises critical concerns about alignment. Although extensive research has addressed issues such as factuality, bias, and toxicity, comparatively little attention has been paid to measuring alignment with preferences, i.e., the relative desirability of different alternatives, a concept used in decision making, economics, and social choice theory. Yet a reliable decision-making agent makes choices that align well with user preferences. In this paper, we generalize existing methods that exploit LLMs for ranking alternative outcomes by addressing alignment with the broader and more flexible concept of user preferences, which includes both strict preferences and indifference among alternatives. To this end, we put forward design principles for using LLMs to implement rational choice functions, and provide the necessary tools to measure preference satisfaction. We demonstrate the applicability of our approach through an empirical study in a practical application of an IUI in the automotive domain.
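The abstract's notion of a rational choice function over preferences that allow both strict preference and indifference can be sketched in a few lines. The sketch below is a generic illustration of the underlying choice-theory concept, not the paper's implementation: it models user preferences as a weak preference relation (a total preorder, here derived from an assumed utility table) and selects every alternative that is weakly preferred to all others, so tied alternatives are returned together. All names (`choice`, `utility`, the automotive options) are hypothetical.

```python
def choice(alternatives, weakly_prefers):
    """Rational choice function: return every alternative x that is
    weakly preferred to all alternatives in the menu. Under a total
    preorder, this is the set of best (undominated) elements; ties
    express indifference among equally good options."""
    return [x for x in alternatives
            if all(weakly_prefers(x, y) for y in alternatives)]


# Assumed utility table standing in for elicited user preferences;
# equal utilities encode indifference between alternatives.
utility = {"sedan": 2, "suv": 2, "van": 1}

def weakly_prefers(x, y):
    # x is at least as desirable as y.
    return utility[x] >= utility[y]

print(choice(["sedan", "suv", "van"], weakly_prefers))
# -> ['sedan', 'suv']  (indifferent between sedan and suv; van dominated)
```

Measuring preference satisfaction, as the paper proposes, would then amount to comparing an LLM agent's selections against the output of such a choice function; the paper's actual design principles and metrics are more elaborate than this toy relation.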

Country of Origin
🇩🇪 Germany

Page Count
12 pages

Category
Computer Science:
Artificial Intelligence