Softly Symbolifying Kolmogorov-Arnold Networks
By: James Bagrow, Josh Bongard
Potential Business Impact:
Makes AI models explainable by expressing what they learn as readable math formulas.
Kolmogorov-Arnold Networks (KANs) offer a promising path toward interpretable machine learning: their learnable activations can be studied individually, while collectively fitting complex data accurately. In practice, however, trained activations often lack symbolic fidelity, learning pathological decompositions with no meaningful correspondence to interpretable forms. We propose Softly Symbolified Kolmogorov-Arnold Networks (S2KAN), which integrate symbolic primitives directly into training. Each activation draws from a dictionary of symbolic and dense terms, with learnable gates that sparsify the representation. Crucially, this sparsification is differentiable, enabling end-to-end optimization, and is guided by a principled Minimum Description Length objective. When symbolic terms suffice, S2KAN discovers interpretable forms; when they do not, it gracefully degrades to dense splines. We demonstrate competitive or superior accuracy with substantially smaller models across symbolic benchmarks, dynamical systems forecasting, and real-world prediction tasks, and observe evidence of emergent self-sparsification even without regularization pressure.
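To make the idea in the abstract concrete, below is a minimal PyTorch-style sketch of one "softly symbolified" activation: a small dictionary of symbolic primitives plus a dense fallback term, mixed through differentiable gates and regularized by a description-length-style cost. The class name, primitive set, sigmoid gating, and penalty values are illustrative assumptions for this sketch, not the authors' implementation.

```python
# Sketch of a gated symbolic-plus-dense activation, loosely following the
# abstract's description of S2KAN. All specifics here are assumptions.
import torch
import torch.nn as nn


class SoftSymbolicActivation(nn.Module):
    """One learnable 1-D activation: a gated sum of symbolic and dense terms."""

    def __init__(self, hidden: int = 16):
        super().__init__()
        # Symbolic dictionary (assumed set): identity, square, sine, exp.
        self.primitives = [
            lambda x: x,
            lambda x: x ** 2,
            torch.sin,
            lambda x: torch.exp(torch.clamp(x, max=4.0)),
        ]
        # Dense fallback term; a tiny MLP stands in for a spline here.
        self.dense = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        n_terms = len(self.primitives) + 1            # symbolic terms + dense term
        self.weights = nn.Parameter(0.1 * torch.randn(n_terms))
        self.gate_logits = nn.Parameter(torch.zeros(n_terms))  # soft on/off gates

    def gates(self) -> torch.Tensor:
        # Differentiable gates in (0, 1); sparsity comes from the penalty below.
        return torch.sigmoid(self.gate_logits)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1) -> (batch, 1)
        terms = torch.cat([p(x) for p in self.primitives] + [self.dense(x)], dim=-1)
        return (terms * self.weights * self.gates()).sum(dim=-1, keepdim=True)

    def description_length_penalty(self) -> torch.Tensor:
        # Description-length-inspired pressure (assumed form): pay a cost per
        # open gate, with the dense fallback priced above a symbolic primitive.
        costs = torch.ones_like(self.gate_logits)
        costs[-1] = 4.0
        return (self.gates() * costs).sum()


if __name__ == "__main__":
    torch.manual_seed(0)
    act = SoftSymbolicActivation()
    x = torch.linspace(-2.0, 2.0, 256).unsqueeze(-1)
    y = 2.0 * torch.sin(x) + 0.5 * x ** 2       # target with an exact symbolic form
    opt = torch.optim.Adam(act.parameters(), lr=1e-2)
    for step in range(2000):
        opt.zero_grad()
        loss = ((act(x) - y) ** 2).mean() + 1e-3 * act.description_length_penalty()
        loss.backward()
        opt.step()
    # Gates for [x, x^2, sin, exp, dense]; the penalty nudges unused terms down.
    print("gates:", [round(g, 3) for g in act.gates().tolist()])
```

In this sketch, if the symbolic gates stay open while the dense gate closes, the learned activation can be read off directly as a formula; if the dictionary is insufficient, the dense term carries the fit, mirroring the graceful degradation described in the abstract.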
Similar Papers
A Practitioner's Guide to Kolmogorov-Arnold Networks
Machine Learning (CS)
Makes machine learning smarter and easier to understand.
Opening the Black-Box: Symbolic Regression with Kolmogorov-Arnold Networks for Energy Applications
Machine Learning (CS)
Makes AI understandable through math equations.