Hyb-KAN ViT: Hybrid Kolmogorov-Arnold Networks Augmented Vision Transformer
By: Sainath Dey, Mitul Goswami, Jashika Sethi, and more
Potential Business Impact:
Makes computer vision models smarter and faster.
This study addresses the inherent limitations of Multi-Layer Perceptrons (MLPs) in Vision Transformers (ViTs) by introducing the Hybrid Kolmogorov-Arnold Network (KAN)-ViT (Hyb-KAN ViT), a novel framework that integrates wavelet-based spectral decomposition and spline-optimized activation functions. Prior work has overlooked the prebuilt modularity of the ViT architecture and the edge-detection capabilities of wavelet functions. We propose two key modules: Efficient-KAN (Eff-KAN), which replaces MLP layers with spline functions, and Wavelet-KAN (Wav-KAN), which leverages orthogonal wavelet transforms for multi-resolution feature extraction. These modules are systematically integrated into ViT encoder layers and classification heads to enhance spatial-frequency modeling while mitigating computational bottlenecks. Experiments on ImageNet-1K (image recognition), COCO (object detection and instance segmentation), and ADE20K (semantic segmentation) demonstrate state-of-the-art performance with Hyb-KAN ViT. Ablation studies validate the efficacy of wavelet-driven spectral priors in segmentation and of spline-based efficiency in detection tasks. The framework establishes a new paradigm for balancing parameter efficiency and multi-scale representation in vision architectures.
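To make the core idea concrete, below is a minimal PyTorch sketch of how a KAN-style layer might replace the MLP inside a ViT encoder block. This is not the authors' Eff-KAN or Wav-KAN implementation: the `SimpleKANLinear` layer, its Gaussian radial-basis expansion (standing in for B-spline bases), and all hyperparameters (grid size, embedding dimension, head count) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SimpleKANLinear(nn.Module):
    """Toy KAN-style layer: each input coordinate is expanded over a fixed grid
    of Gaussian radial basis functions (a smooth stand-in for B-splines), and a
    linear map mixes the basis responses into the output features."""
    def __init__(self, in_features, out_features, grid_size=8, grid_range=(-2.0, 2.0)):
        super().__init__()
        grid = torch.linspace(grid_range[0], grid_range[1], grid_size)
        self.register_buffer("grid", grid)                      # fixed knot positions
        self.scale = (grid_range[1] - grid_range[0]) / (grid_size - 1)
        self.base = nn.Linear(in_features, out_features)        # residual linear path
        self.spline = nn.Linear(in_features * grid_size, out_features, bias=False)

    def forward(self, x):                                        # x: (..., in_features)
        # Gaussian bumps centred on each grid knot, per input coordinate.
        basis = torch.exp(-((x.unsqueeze(-1) - self.grid) / self.scale) ** 2)
        basis = basis.flatten(start_dim=-2)                      # (..., in_features * grid_size)
        return self.base(torch.nn.functional.silu(x)) + self.spline(basis)

class KANEncoderBlock(nn.Module):
    """ViT encoder block with the usual two-layer MLP swapped for KAN-style layers."""
    def __init__(self, dim=192, heads=3, hidden=384):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(SimpleKANLinear(dim, hidden), SimpleKANLinear(hidden, dim))

    def forward(self, tokens):                                   # tokens: (B, N, dim)
        h = self.norm1(tokens)
        tokens = tokens + self.attn(h, h, h, need_weights=False)[0]
        return tokens + self.ffn(self.norm2(tokens))

if __name__ == "__main__":
    block = KANEncoderBlock()
    out = block(torch.randn(2, 197, 192))                        # 196 patches + CLS token
    print(out.shape)                                             # torch.Size([2, 197, 192])
```

The sketch keeps the standard pre-norm attention path untouched and only swaps the feed-forward sub-block, mirroring the paper's claim that KAN modules slot into the ViT's prebuilt modular structure; the wavelet-based Wav-KAN variant would replace the basis expansion with orthogonal wavelet transforms.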
Similar Papers
ViKANformer: Embedding Kolmogorov Arnold Networks in Vision Transformers for Pattern-Based Learning
CV and Pattern Recognition
Makes computer vision smarter by learning better.
Kolmogorov-Arnold Attention: Is Learnable Attention Better For Vision Transformers?
Machine Learning (CS)
Makes AI understand pictures better by learning attention.
When Swin Transformer Meets KANs: An Improved Transformer Architecture for Medical Image Segmentation
CV and Pattern Recognition
Helps doctors see inside bodies better with less data.