Leveraging KANs for Expedient Training of Multichannel MLPs via Preconditioning and Geometric Refinement
By: Jonas A. Actor, Graham Harper, Ben Southworth, and more
Potential Business Impact:
Makes machine learning models train faster.
Multilayer perceptrons (MLPs) are a workhorse machine learning architecture, used in a variety of modern deep learning frameworks. However, Kolmogorov-Arnold Networks (KANs) have recently become increasingly popular due to their success on a range of problems, particularly for scientific machine learning tasks. In this paper, we exploit the relationship between KANs and multichannel MLPs to gain structural insight into how to train MLPs faster. We demonstrate that the KAN basis (1) provides geometrically localized support, and (2) acts as preconditioned descent in the ReLU basis, together resulting in expedited training and improved accuracy. Our results show the equivalence between free-knot spline KAN architectures and a class of MLPs that are refined geometrically along the channel dimension of each weight tensor. We exploit this structural equivalence to define a hierarchical refinement scheme that dramatically accelerates training of the multichannel MLP architecture. We show that further accuracy improvements can be obtained by allowing the $1$D locations of the spline knots to be trained simultaneously with the weights. These advances are demonstrated on a range of benchmark examples for regression and scientific machine learning.
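To make the claimed KAN-MLP correspondence concrete, here is a minimal sketch of the standard spline-to-ReLU rewriting that underlies it; the notation ($m$, $t_j$, $c_j$, $w_0$, $w_1$) is illustrative and not taken from the paper. A $1$D free-knot linear spline with knots $t_1 < \dots < t_m$, such as a KAN edge activation, can be expressed in the ReLU basis as
$$ s(x) \;=\; w_0 + w_1 x + \sum_{j=1}^{m} c_j \,\mathrm{ReLU}(x - t_j), $$
so each spline edge function corresponds to a group of $m$ ReLU units acting on the same scalar input, i.e. one channel of a multichannel MLP. Training the knot locations $t_j$ alongside the coefficients $c_j$ recovers the free-knot setting described in the abstract, and nested knot sets are one natural way to picture the hierarchical refinement it refers to.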
Similar Papers
Scientific Machine Learning with Kolmogorov-Arnold Networks
Machine Learning (CS)
Makes computers learn and understand better.
KAN or MLP? Point Cloud Shows the Way Forward
CV and Pattern Recognition
Helps computers understand 3D shapes better.
MatrixKAN: Parallelized Kolmogorov-Arnold Network
Machine Learning (CS)
Makes smart computer brains learn much faster.