Almost Right: Making First-layer Kernels Nearly Orthogonal Improves Model Generalization

Published: April 23, 2025 | arXiv ID: 2504.16362v1

By: Colton R. Crum, Adam Czajka

Potential Business Impact:

Helps AI models generalize more reliably to new, unseen data, improving tasks such as biometric spoof detection and medical anomaly screening.

Business Areas:
Image Recognition, Data and Analytics, Software

An ongoing research challenge across several domains in computer vision is how to increase model generalization capabilities. Several attempts to improve generalization are heavily inspired by human perceptual intelligence, which is remarkable in both its performance and its efficiency at generalizing to unknown samples. Many of these methods attempt to force portions of the network to be orthogonal, following observations from neuroscience related to early vision processes. In this paper, we propose a loss component that regularizes the filtering kernels in the first convolutional layer of a network to make them nearly orthogonal. Deviating from previous works, we give the network flexibility in which pairs of kernels it makes orthogonal, allowing the network to navigate to a better solution space without imposing harsh penalties. Without architectural modifications, we report substantial gains in generalization performance with the proposed loss compared to previous works (including orthogonalization- and saliency-based regularization methods) across three different architectures (ResNet-50, DenseNet-121, ViT-b-16) and two difficult open-set recognition tasks: presentation attack detection in iris biometrics, and anomaly detection in chest X-ray images.
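
To make the idea concrete, below is a minimal, hypothetical PyTorch sketch of a "nearly orthogonal" first-layer regularizer in the spirit of the abstract: flatten the first-layer kernels, measure their pairwise alignment, and penalize only the most-aligned pairs so the network keeps flexibility over which pairs become orthogonal. The function name near_orthogonality_loss, the top_k selection, and the 0.1 loss weight are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def near_orthogonality_loss(conv_weight: torch.Tensor, top_k: int | None = None) -> torch.Tensor:
    """Encourage first-layer kernels to be nearly orthogonal (illustrative sketch).

    conv_weight: tensor of shape (out_channels, in_channels, kH, kW),
                 e.g. model.conv1.weight for a ResNet-50.
    top_k: if given, penalize only the k most-aligned kernel pairs,
           leaving the network free to choose which pairs it makes orthogonal.
    """
    n = conv_weight.shape[0]
    # Flatten each kernel to a vector and L2-normalize it.
    kernels = F.normalize(conv_weight.reshape(n, -1), dim=1)
    # Pairwise cosine similarities; off-diagonal entries measure alignment.
    gram = kernels @ kernels.t()
    off_diag_mask = ~torch.eye(n, dtype=torch.bool, device=gram.device)
    sims = gram[off_diag_mask].abs()
    if top_k is not None:
        # Penalize only the most-aligned pairs rather than the whole Gram
        # matrix, i.e. a soft push toward orthogonality instead of a harsh one.
        sims = sims.topk(min(top_k, sims.numel())).values
    return sims.mean()

# Hypothetical usage: add the regularizer to the task loss with a small weight.
# loss = criterion(logits, labels) + 0.1 * near_orthogonality_loss(model.conv1.weight, top_k=64)
```

Penalizing only the largest off-diagonal similarities, rather than driving the full Gram matrix toward the identity, is one plausible reading of the "nearly orthogonal" and "flexibility" aspects described above; the paper's actual loss formulation may differ.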

Country of Origin
🇺🇸 United States

Page Count
8 pages

Category
Computer Science:
Computer Vision and Pattern Recognition