TT-FSI: Scalable Faithful Shapley Interactions via Tensor-Train

Published: January 5, 2026 | arXiv ID: 2601.01903v1

By: Ungsik Kim, Suwon Lee

Potential Business Impact:

Explains AI decisions by efficiently quantifying how groups of input features interact.

Business Areas:
A/B Testing, Data and Analytics

The Faithful Shapley Interaction (FSI) index uniquely satisfies the faithfulness axiom among Shapley interaction indices, but computing FSI requires $O(d^\ell \cdot 2^d)$ time and existing implementations use $O(4^d)$ memory. We present TT-FSI, which exploits FSI's algebraic structure via Matrix Product Operators (MPO). Our main theoretical contribution is proving that the linear operator $v \mapsto \text{FSI}(v)$ admits an MPO representation with TT-rank $O(\ell d)$, enabling an efficient sweep algorithm with $O(\ell^2 d^3 \cdot 2^d)$ time and $O(\ell d^2)$ core storage, an exponential improvement over existing methods. Experiments on six datasets ($d=8$ to $d=20$) demonstrate up to a 280$\times$ speedup over the baseline, 85$\times$ over SHAP-IQ, and a 290$\times$ memory reduction. TT-FSI scales to $d=20$ (1M coalitions) where all competing methods fail.
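To make the key idea concrete: an MPO represents a $2^d \times 2^d$ linear operator as a chain of $d$ small four-legged cores, and applying it to a value vector is a single left-to-right contraction sweep that never materializes the full matrix. The sketch below illustrates this mechanism with random cores; the function names, shapes, and ranks are hypothetical stand-ins, not the authors' TT-FSI implementation or its actual FSI cores.

```python
import numpy as np

def random_mpo(d, rank):
    """Random MPO: d cores of shape (r_left, 2, 2, r_right), boundary ranks 1.

    Illustrative stand-in for the paper's FSI-operator cores
    (whose TT-rank is O(l*d)); not the authors' construction.
    """
    cores = []
    for k in range(d):
        rl = 1 if k == 0 else rank
        rr = 1 if k == d - 1 else rank
        cores.append(np.random.randn(rl, 2, 2, rr))
    return cores

def mpo_apply(cores, v):
    """Apply the MPO to a length-2^d vector via one left-to-right sweep.

    Only core-sized intermediates are materialized, never the
    dense 2^d x 2^d operator.
    """
    d = len(cores)
    t = v.reshape((1,) + (2,) * d)   # (bond=1, site_1, ..., site_d)
    for core in cores:
        # Contract the core's (left-bond, input) legs with the tensor's
        # (bond, current-site) axes; processed output legs move to the back.
        t = np.tensordot(core, t, axes=([0, 2], [0, 1]))
        t = np.moveaxis(t, 0, -1)    # (new bond, remaining sites..., outputs...)
    return t.reshape(-1)             # final bond has size 1; flatten outputs
```

For small $d$ one can check the sweep against the explicitly contracted dense operator, e.g. `np.einsum('aijb,bklc,cmnd->ikmjln', *cores).reshape(8, 8) @ v` for $d=3$; the sweep costs one small `tensordot` per site instead of a $4^d$-entry matrix build.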

Country of Origin
🇰🇷 Korea, Republic of

Page Count
18 pages

Category
Computer Science:
Machine Learning (CS)