Score: 1

Neural Logic Networks for Interpretable Classification

Published: August 11, 2025 | arXiv ID: 2508.08172v1

By: Vincent Perreault, Katsumi Inoue, Richard Labib, and more

Potential Business Impact:

Lets computers explain their decisions like a puzzle.

Traditional neural networks achieve impressive classification performance, but what they learn cannot be inspected, verified, or extracted. Neural Logic Networks, by contrast, have an interpretable structure that lets them learn a logical mechanism relating inputs and outputs through AND and OR operations. We generalize these networks with NOT operations and with biases that account for unobserved data, and we develop a rigorous logical and probabilistic model in terms of concept combinations to motivate their use. We also propose a novel factorized IF-THEN rule structure for the model, along with a modified learning algorithm. Our method improves the state of the art in Boolean network discovery and learns relevant, interpretable rules in tabular classification, notably on an example from the medical field where interpretability has tangible value.
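To make the idea concrete, here is a minimal sketch (not the paper's actual implementation; all function names and the example rule are hypothetical) of how Boolean AND, OR, and NOT can be relaxed into differentiable "soft" gates, so that a rule structure like the one described can in principle be learned by gradient descent and then read off as crisp IF-THEN rules:

```python
# Hypothetical illustration of soft logic gates: each weight w in [0, 1]
# controls how strongly an input participates in the gate. With 0/1 weights
# and 0/1 inputs, the gates reduce exactly to Boolean logic.

def soft_not(x):
    # NOT as complement on [0, 1].
    return 1.0 - x

def soft_and(inputs, weights):
    # Product relaxation of AND: an input constrains the conjunction
    # only to the degree its weight is close to 1.
    out = 1.0
    for x, w in zip(inputs, weights):
        out *= 1.0 - w * (1.0 - x)
    return out

def soft_or(inputs, weights):
    # OR via De Morgan: NOT(AND(NOT x_i)).
    out = 1.0
    for x, w in zip(inputs, weights):
        out *= 1.0 - w * x
    return 1.0 - out

# Example IF-THEN rule (invented for illustration, not from the paper):
#   IF (fever AND NOT cough) OR headache THEN flag
def rule(fever, cough, headache):
    conj = soft_and([fever, soft_not(cough)], [1.0, 1.0])
    return soft_or([conj, headache], [1.0, 1.0])

print(rule(1.0, 0.0, 0.0))  # 1.0 — fever without cough fires the rule
print(rule(0.0, 0.0, 0.0))  # 0.0 — no condition holds
```

During training the weights would be continuous and tuned by backpropagation; rounding them to 0 or 1 afterwards recovers an inspectable Boolean rule, which is the kind of interpretability the abstract refers to.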

Country of Origin
🇨🇦 Canada

Page Count
49 pages

Category
Computer Science:
Machine Learning (CS)