XChoice: Explainable Evaluation of AI-Human Alignment in LLM-based Constrained Choice Decision Making

Published: January 16, 2026 | arXiv ID: 2601.11286v1

By: Weihong Qi, Fan Huang, Rasika Muralidharan, and more

Potential Business Impact:

Shows where, and why, an AI model's simulated decisions diverge from real human choices.

Business Areas:
Artificial Intelligence, Data and Analytics, Science and Engineering, Software

We present XChoice, an explainable framework for evaluating AI-human alignment in constrained decision making. Moving beyond outcome-agreement metrics such as accuracy and F1 score, XChoice fits a mechanism-based decision model to both human data and LLM-generated decisions, recovering interpretable parameters that capture the relative importance of decision factors, constraint sensitivity, and implied trade-offs. Alignment is then assessed by comparing these parameter vectors across models, options, and subgroups. We demonstrate XChoice on Americans' daily time allocation, using the American Time Use Survey (ATUS) as human ground truth, revealing heterogeneous alignment across models and activities, with salient misalignment concentrated in Black and married subgroups. We further validate the robustness of XChoice via an invariance analysis and evaluate targeted mitigation with a retrieval-augmented generation (RAG) intervention. Overall, XChoice provides mechanism-based metrics that diagnose misalignment and support informed improvements beyond surface-level outcome matching.
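The abstract's central idea, comparing fitted decision-model parameters rather than raw outcome agreement, can be illustrated with a short sketch. The Python below is a hedged illustration, not the paper's method: it assumes a plain multinomial logit as a stand-in for the paper's mechanism-based model, runs on synthetic data, omits XChoice's constraint-sensitivity terms, and every variable name is hypothetical.

```python
# Hypothetical sketch of mechanism-level alignment: fit the SAME simple
# choice model (a multinomial logit; an assumption, not the paper's model)
# to human and to LLM-generated decisions, then compare the recovered
# parameter vectors instead of outcome accuracy.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_obs, n_options, n_factors = 500, 4, 3

# Synthetic stand-in data: each option is described by a few decision
# factors (e.g., time cost, enjoyment, obligation; all invented here).
X = rng.normal(size=(n_obs, n_options, n_factors))
true_human_beta = np.array([1.0, -0.5, 0.8])
true_llm_beta = np.array([1.2, -0.1, 0.3])   # deliberately misaligned

def sample_choices(beta):
    """Draw one chosen option per observation under a softmax rule."""
    utils = X @ beta                                   # (n_obs, n_options)
    probs = np.exp(utils - utils.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    return np.array([rng.choice(n_options, p=p) for p in probs])

def neg_log_lik(beta, choices):
    """Negative log-likelihood of the observed choices under the logit model."""
    utils = X @ beta
    m = utils.max(axis=1, keepdims=True)               # numerical stability
    log_probs = utils - (m + np.log(np.exp(utils - m).sum(axis=1, keepdims=True)))
    return -log_probs[np.arange(n_obs), choices].sum()

def fit(choices):
    """Recover the factor-weight vector by maximum likelihood."""
    return minimize(neg_log_lik, np.zeros(n_factors), args=(choices,)).x

beta_human = fit(sample_choices(true_human_beta))
beta_llm = fit(sample_choices(true_llm_beta))

# Alignment is read off the parameters, not the predictions: here,
# cosine similarity between the two recovered weight vectors.
cos = beta_human @ beta_llm / (np.linalg.norm(beta_human) * np.linalg.norm(beta_llm))
print("human factor weights:", np.round(beta_human, 2))
print("LLM factor weights:  ", np.round(beta_llm, 2))
print("mechanism-level alignment (cosine):", round(cos, 3))
```

The point of this framing is that two decision makers can agree on most outcomes while weighting the underlying factors quite differently; comparing the fitted parameter vectors, optionally per subgroup as the abstract describes, surfaces that mechanism-level gap.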

Country of Origin
🇺🇸 United States

Page Count
19 pages

Category
Computer Science:
Artificial Intelligence