Foundations of Interpretable Models
By: Pietro Barbiero, Mateo Espinosa Zarlenga, Alberto Termine, and more
Potential Business Impact:
Makes AI easier to understand and build.
We argue that existing definitions of interpretability are not actionable: they fail to inform users about general, sound, and robust interpretable model design. This makes current interpretability research fundamentally ill-posed. To address this issue, we propose a definition of interpretability that is general, simple, and subsumes existing informal notions within the interpretable AI community. We show that our definition is actionable, as it directly reveals the foundational properties, underlying assumptions, principles, data structures, and architectural features necessary for designing interpretable models. Building on this, we propose a general blueprint for designing interpretable models and introduce the first open-source library with native support for interpretable data structures and processes.
Similar Papers
Towards the Formalization of a Trustworthy AI for Mining Interpretable Models explOiting Sophisticated Algorithms
Artificial Intelligence
Makes AI fair, private, and understandable.
Interpretability as Alignment: Making Internal Understanding a Design Principle
Machine Learning (CS)
Makes AI understandable and safe for people.
Machine Learning for Medicine Must Be Interpretable, Shareable, Reproducible and Accountable by Design
Machine Learning (CS)
Makes medical AI trustworthy and understandable.