Scientific Machine Learning with Kolmogorov-Arnold Networks

Published: July 30, 2025 | arXiv ID: 2507.22959v1

By: Salah A. Faroughi, Farinaz Mostajeran, Amin Hamed Mashhadzadeh, and more

Potential Business Impact:

Improves the accuracy, interpretability, and efficiency of machine-learning models used in scientific and engineering applications.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

The field of scientific machine learning, which originally utilized multilayer perceptrons (MLPs), is increasingly adopting Kolmogorov-Arnold Networks (KANs) for data encoding. This shift is driven by the limitations of MLPs, including poor interpretability, fixed activation functions, and difficulty capturing localized or high-frequency features. KANs address these issues with enhanced interpretability and flexibility, enabling more efficient modeling of complex nonlinear interactions and effectively overcoming the constraints associated with conventional MLP architectures. This review categorizes recent progress in KAN-based models across three distinct perspectives: (i) data-driven learning, (ii) physics-informed modeling, and (iii) deep operator learning. Each perspective is examined through the lens of architectural design, training strategies, application efficacy, and comparative evaluation against MLP-based counterparts. By benchmarking KANs against MLPs, we highlight consistent improvements in accuracy, convergence, and spectral representation, clarifying KANs' advantages in capturing complex dynamics while learning more effectively. Finally, this review identifies critical challenges and open research questions in KAN development, particularly regarding computational efficiency, theoretical guarantees, hyperparameter tuning, and algorithm complexity. We also outline future research directions aimed at improving the robustness, scalability, and physical consistency of KAN-based frameworks.
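To make the contrast with MLPs concrete: where an MLP computes fixed activations of learned linear combinations, a KAN places a learnable univariate function on each input-output edge and sums their outputs. The sketch below is illustrative only, assuming a Gaussian radial-basis parameterization of the edge functions for simplicity (the original KAN formulation uses B-splines); the class name `KANLayer` and all parameters are hypothetical.

```python
import numpy as np

class KANLayer:
    """Minimal sketch of one Kolmogorov-Arnold layer (not a reference
    implementation). Each edge (j -> i) carries a learnable univariate
    function phi_ij, here a weighted sum of Gaussian radial basis
    functions rather than the B-splines used in the original work."""

    def __init__(self, in_dim, out_dim, n_basis=8, seed=0):
        rng = np.random.default_rng(seed)
        self.centers = np.linspace(-1.0, 1.0, n_basis)  # basis centers on [-1, 1]
        self.width = 2.0 / n_basis                      # shared basis width
        # One coefficient vector per edge: shape (out_dim, in_dim, n_basis)
        self.coef = rng.normal(0.0, 0.1, (out_dim, in_dim, n_basis))

    def forward(self, x):
        # x: (batch, in_dim). Basis evaluations: (batch, in_dim, n_basis)
        b = np.exp(-(((x[..., None] - self.centers) / self.width) ** 2))
        # Output i is the sum over inputs j of phi_ij(x_j); contract the
        # in_dim and basis axes against the per-edge coefficients.
        return np.einsum("bjk,ojk->bo", b, self.coef)

layer = KANLayer(in_dim=3, out_dim=2)
x = np.random.default_rng(1).uniform(-1.0, 1.0, (4, 3))
y = layer.forward(x)
print(y.shape)  # (4, 2)
```

In a trained KAN the coefficients are fit by gradient descent, and because each `phi_ij` is a one-dimensional function it can be plotted or symbolically simplified, which is the source of the interpretability advantage the review highlights.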

Country of Origin
🇺🇸 United States

Page Count
44 pages

Category
Computer Science:
Machine Learning (CS)