Interpreto: An Explainability Library for Transformers

Published: December 10, 2025 | arXiv ID: 2512.09730v1

By: Antonin Poché, Thomas Mullor, Gabriele Sarti, and more

Interpreto is a Python library for post-hoc explainability of HuggingFace text models, from early BERT variants to LLMs. It provides two complementary families of methods: attributions and concept-based explanations. The library connects recent research to practical tooling for data scientists, aiming to make explanations accessible to end users, and ships with documentation, examples, and tutorials. Interpreto supports both classification and generation models through a unified API. A key differentiator is its concept-based functionality, which goes beyond feature-level attributions and is uncommon in existing libraries. The library is open source and installable via pip install interpreto. Code and documentation are available at https://github.com/FOR-sight-ai/interpreto.
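To make the "unified API over HuggingFace models" idea concrete, the sketch below shows how an attribution explainer might be applied to a sentiment classifier. Only pip install interpreto, the GitHub URL, and the standard transformers calls come from the text or known sources; the Occlusion class, its constructor, and the explain method are hypothetical placeholders, not the library's confirmed API.

```python
# pip install interpreto transformers torch
#
# Illustrative sketch only: the transformers calls below are standard,
# but the interpreto class and method names are hypothetical placeholders.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Hypothetical explainer from the attribution family; the real class
# name and signature may differ in interpreto.
from interpreto import Occlusion  # hypothetical import

explainer = Occlusion(model, tokenizer)                 # hypothetical constructor
explanations = explainer.explain(                       # hypothetical method
    ["The movie was surprisingly good."]
)

# Assumed output format: one list of (token, score) pairs per input text.
for token, score in explanations[0]:
    print(f"{token}\t{score:+.3f}")
```

Under this assumed design, swapping the classifier for a generative model or the occlusion method for a concept-based explainer would only change the model class and the explainer class, which is the kind of uniformity the paper's unified API aims for.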

Category: Computer Science (Computation and Language)