$L_1$-norm Regularized Indefinite Kernel Logistic Regression
By: Shaoxin Wang, Hanjing Yao
Potential Business Impact:
Classifies data more accurately and yields sparse models that are easier to interpret.
Kernel logistic regression (KLR) is a powerful classification method widely applied across diverse domains. In many real-world scenarios, indefinite kernels capture more domain-specific structural information than positive definite kernels. This paper proposes a novel $L_1$-norm regularized indefinite kernel logistic regression (RIKLR) model, which extends the existing indefinite kernel logistic regression (IKLR) framework by introducing sparsity via an $L_1$-norm penalty. This regularization enhances interpretability and generalization, but it also makes the optimization landscape nonsmooth and nonconvex. To address these challenges, a theoretically grounded and computationally efficient proximal linearized algorithm is developed. Experimental results on multiple benchmark datasets demonstrate the superior performance of the proposed method in terms of both accuracy and sparsity.
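The abstract does not spell out the update rule, but a proximal linearized algorithm for a composite objective of this kind typically alternates a gradient step on the smooth part with the closed-form $L_1$ proximal map (soft-thresholding). The Python sketch below is a minimal illustration under assumed ingredients, not the paper's exact method: the objective form (average logistic loss plus a quadratic term $\frac{\gamma}{2}\alpha^\top K\alpha$, which is nonconvex when $K$ is indefinite), the step-size rule, and the names `riklr_proximal_linearized`, `lam`, and `gamma` are all hypothetical.

```python
import numpy as np
from scipy.special import expit  # numerically stable logistic sigmoid

def soft_threshold(x, tau):
    """Proximal map of tau * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def riklr_proximal_linearized(K, y, lam=0.1, gamma=0.01, n_iter=500):
    """Proximal linearized iteration for an ASSUMED RIKLR-style objective:

        (1/n) * sum_i log(1 + exp(-y_i * (K @ alpha)_i))  # smooth logistic loss
        + (gamma / 2) * alpha @ K @ alpha                 # nonconvex if K is indefinite
        + lam * ||alpha||_1                               # nonsmooth sparsity term

    K : (n, n) symmetric Gram matrix, not required to be positive definite
    y : (n,) labels in {-1, +1}
    """
    n = K.shape[0]
    alpha = np.zeros(n)
    # Step size 1/L from a Lipschitz bound on the smooth part's gradient:
    # logistic curvature <= ||K||_2^2 / (4n); the quadratic adds gamma * ||K||_2.
    K_norm = np.linalg.norm(K, 2)
    step = 1.0 / (K_norm ** 2 / (4.0 * n) + gamma * K_norm)
    for _ in range(n_iter):
        margins = y * (K @ alpha)
        # gradient of the smooth part: -(1/n) K^T (y * sigma(-margins)) + gamma K alpha
        grad = -(K.T @ (y * expit(-margins))) / n + gamma * (K @ alpha)
        # gradient step on the linearized smooth part, then the L1 proximal map
        alpha = soft_threshold(alpha - step * grad, step * lam)
    return alpha
```

For example, with a classic indefinite kernel such as the sigmoid kernel `K = np.tanh(X @ X.T)`, the routine can be called directly on the Gram matrix. Because the proximal map soft-thresholds every coefficient, weakly contributing training points are driven exactly to zero, which is the mechanism behind the sparsity the abstract reports.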
Similar Papers
Interpretable Kernels
Machine Learning (Stat)
Shows how kernel models can be interpreted in terms of the original input data.
Kernel Logistic Regression Learning for High-Capacity Hopfield Networks
Machine Learning (CS)
Uses kernel logistic regression to raise the storage capacity of Hopfield networks.
Kernel Regression in Structured Non-IID Settings: Theory and Implications for Denoising Score Learning
Machine Learning (Stat)
Analyzes kernel regression when data are dependent rather than i.i.d., with implications for denoising score learning.