Score: 1

Leveraging KANs for Expedient Training of Multichannel MLPs via Preconditioning and Geometric Refinement

Published: May 23, 2025 | arXiv ID: 2505.18131v1

By: Jonas A. Actor, Graham Harper, Ben Southworth and more

Potential Business Impact:

Enables machine learning models (specifically multichannel MLPs) to train faster and reach higher accuracy.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Multilayer perceptrons (MLPs) are a workhorse machine learning architecture, used in a variety of modern deep learning frameworks. Recently, however, Kolmogorov-Arnold Networks (KANs) have become increasingly popular due to their success on a range of problems, particularly for scientific machine learning tasks. In this paper, we exploit the relationship between KANs and multichannel MLPs to gain structural insight into how to train MLPs faster. We demonstrate that the KAN basis (1) provides geometrically localized support, and (2) acts as a preconditioned descent in the ReLU basis, together yielding expedited training and improved accuracy. Our results show the equivalence between free-knot spline KAN architectures and a class of MLPs that are refined geometrically along the channel dimension of each weight tensor. We exploit this structural equivalence to define a hierarchical refinement scheme that dramatically accelerates training of the multichannel MLP architecture. We show that further accuracy improvements can be achieved by allowing the 1D locations of the spline knots to be trained simultaneously with the weights. These advances are demonstrated on a range of benchmark examples for regression and scientific machine learning.
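The spline/ReLU equivalence the abstract leans on can be illustrated with a small numerical check: a 1D free-knot piecewise-linear spline (the KAN-style parameterization) can be rewritten exactly as a one-hidden-layer ReLU expansion (the MLP-style parameterization) whose biases sit at the knot locations. The sketch below is not the paper's code; the knot positions, values, and NumPy implementation are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): the same 1D piecewise-linear
# function written two ways -- as a free-knot spline interpolant and as a
# ReLU expansion with hinges at the knots. Knots and values are placeholders.
import numpy as np

rng = np.random.default_rng(0)
knots = np.sort(rng.uniform(0.0, 1.0, size=6))   # free knot locations t_0 < ... < t_5
values = rng.normal(size=knots.size)             # spline values y_i at the knots

# Slope of each linear piece between consecutive knots.
slopes = np.diff(values) / np.diff(knots)

def spline_interp(x):
    """KAN-style view: piecewise-linear interpolation of (knots, values)."""
    return np.interp(x, knots, values)

def relu_expansion(x):
    """MLP-style view of the same function: affine start plus one ReLU hinge
    per interior knot, whose coefficient is the slope change at that knot."""
    relu = lambda z: np.maximum(z, 0.0)
    out = values[0] + slopes[0] * (x - knots[0])
    for t_i, dc in zip(knots[1:-1], np.diff(slopes)):
        out = out + dc * relu(x - t_i)
    return out

# The two parameterizations agree everywhere on the spline's support.
x = np.linspace(knots[0], knots[-1], 500)
assert np.allclose(spline_interp(x), relu_expansion(x), atol=1e-10)
print("max |spline - ReLU expansion| =",
      np.max(np.abs(spline_interp(x) - relu_expansion(x))))
```

In this illustrative picture, treating the knot locations as trainable parameters (rather than fixing them) and refining them hierarchically corresponds to the paper's geometric refinement along the channel dimension of the multichannel MLP.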

Page Count
20 pages

Category
Computer Science:
Machine Learning (CS)