Score: 3

Towards Interpretable Deep Neural Networks for Tabular Data

Published: September 10, 2025 | arXiv ID: 2509.08617v1

By: Khawla Elhadri, Jörg Schlötterer, Christin Seifert

Potential Business Impact:

Makes the predictions of deep neural networks on tabular data explainable, which supports trust, auditing, and regulatory compliance in domains such as finance and healthcare.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Tabular data is the foundation of many applications in fields such as finance and healthcare. Although DNNs tailored for tabular data achieve competitive predictive performance, they are black boxes with little interpretability. We introduce XNNTab, a neural architecture that uses a sparse autoencoder (SAE) to learn a dictionary of monosemantic features within the latent space used for prediction. Using an automated method, we assign human-interpretable semantics to these features. This allows us to represent predictions as linear combinations of semantically meaningful components. Empirical evaluations demonstrate that XNNTab attains performance on par with or exceeding that of state-of-the-art black-box neural models and classical machine learning approaches, while being fully interpretable.
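The core idea in the abstract, an SAE that re-expresses the prediction latent space as sparse codes over a learned dictionary, with a linear head so each prediction decomposes into per-feature contributions, can be sketched in a few lines. The PyTorch sketch below is a hypothetical illustration only: the class name, layer sizes, loss weighting, and training objective are assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class XNNTabSketch(nn.Module):
    """Illustrative sketch (not the paper's code): a tabular encoder,
    a sparse autoencoder (SAE) that rewrites latents as sparse codes
    over a learned dictionary, and a linear head so each prediction is
    a linear combination of (ideally monosemantic) dictionary features."""

    def __init__(self, n_features, latent_dim=64, dict_size=256, n_classes=2):
        super().__init__()
        # Backbone encoder for tabular inputs (architecture is assumed).
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # SAE: overcomplete dictionary with non-negative sparse codes.
        self.sae_encode = nn.Linear(latent_dim, dict_size)
        self.sae_decode = nn.Linear(dict_size, latent_dim)
        # Linear head over the sparse codes; its weights give each
        # dictionary feature's additive contribution to the prediction.
        self.head = nn.Linear(dict_size, n_classes)

    def forward(self, x):
        z = self.encoder(x)
        code = torch.relu(self.sae_encode(z))  # sparse, non-negative activations
        z_hat = self.sae_decode(code)          # reconstruction for the SAE loss
        logits = self.head(code)               # prediction = linear combo of features
        return logits, code, z, z_hat

def sae_loss(z, z_hat, code, l1_weight=1e-3):
    # Reconstruction term plus an L1 penalty encouraging sparse codes
    # (the l1_weight value is an arbitrary placeholder).
    return nn.functional.mse_loss(z_hat, z) + l1_weight * code.abs().mean()

# Usage sketch: the contribution of dictionary feature j to class c is
# code[j] * head.weight[c, j], which is what makes predictions decomposable.
model = XNNTabSketch(n_features=10)
x = torch.randn(32, 10)
logits, code, z, z_hat = model(x)
contributions = code.unsqueeze(1) * model.head.weight.unsqueeze(0)  # (batch, class, feature)
```

Under this reading, interpretability comes from inspecting the largest entries of `contributions` per example, once the automated step described in the abstract has attached human-readable semantics to each dictionary feature.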

Country of Origin
🇩🇪 Germany


Page Count
10 pages

Category
Computer Science:
Machine Learning (CS)