Lightweight Channel Attention for Efficient CNNs
By: Prem Babu Kanaparthi, Tulasi Venkata Sri Varshini Padamata
Potential Business Impact:
Makes computer vision models faster and lighter so they run on small devices.
Attention mechanisms have become integral to modern convolutional neural networks (CNNs), delivering notable performance improvements with minimal computational overhead. However, the efficiency-accuracy trade-off of different channel attention designs remains underexplored. This work presents an empirical study comparing Squeeze-and-Excitation (SE), Efficient Channel Attention (ECA), and a proposed Lite Channel Attention (LCA) module across ResNet-18 and MobileNetV2 architectures on CIFAR-10. LCA employs adaptive one-dimensional convolutions with grouped operations to reduce parameter usage while preserving effective attention behavior. Experimental results show that LCA achieves competitive accuracy, reaching 94.68% on ResNet-18 and 93.10% on MobileNetV2, while matching ECA in parameter efficiency and maintaining favorable inference latency. Comprehensive benchmarks including FLOPs, parameter counts, and GPU latency measurements are provided, offering practical insights for deploying attention-enhanced CNNs in resource-constrained environments.
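The abstract does not give LCA's exact formulation, but it names its two ingredients: an adaptive one-dimensional convolution (as in ECA, where the kernel size is derived from the channel count) and grouped operations to cut parameters. The sketch below combines those two ideas into a plausible PyTorch module; the class name, the `groups` default, and the kernel-size heuristic are assumptions, not the authors' implementation.

```python
import math
import torch
import torch.nn as nn

class LiteChannelAttention(nn.Module):
    """Hypothetical LCA-style module: global average pooling, a grouped 1D
    convolution over the channel dimension, and a sigmoid gate. The kernel
    size adapts to the channel count, following the ECA heuristic."""

    def __init__(self, channels: int, groups: int = 4, gamma: int = 2, beta: int = 1):
        super().__init__()
        assert channels % groups == 0, "channels must be divisible by groups"
        # Adaptive kernel size from the channel dimension (ECA-style), forced odd.
        t = int(abs((math.log2(channels) + beta) / gamma))
        k = t if t % 2 else t + 1
        self.groups = groups
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Grouped 1D conv: each group mixes only its own slice of channels,
        # so parameters scale with k * groups instead of k * channels.
        self.conv = nn.Conv1d(groups, groups, kernel_size=k,
                              padding=k // 2, groups=groups, bias=False)
        self.gate = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Squeeze: (B, C, H, W) -> (B, C, 1, 1) -> (B, groups, C // groups)
        y = self.pool(x).view(b, self.groups, c // self.groups)
        # Excite: local channel interactions within each group, then gate.
        y = self.gate(self.conv(y)).view(b, c, 1, 1)
        return x * y

# Usage sketch: gate a 64-channel feature map from a ResNet-style block.
feat = torch.randn(2, 64, 32, 32)
out = LiteChannelAttention(channels=64)(feat)
print(out.shape)  # torch.Size([2, 64, 32, 32])
```

Under these assumptions the parameter count matches ECA's order of magnitude (a handful of weights per group rather than the C²/r weights of an SE bottleneck), which is consistent with the abstract's claim that LCA matches ECA in parameter efficiency.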
Similar Papers
Systematic Integration of Attention Modules into CNNs for Accurate and Generalizable Medical Image Diagnosis
CV and Pattern Recognition
Helps doctors spot sickness in medical pictures better.
Achieving 3D Attention via Triplet Squeeze and Excitation Block
CV and Pattern Recognition
Helps computers understand emotions from faces better.
An Efficient Medical Image Classification Method Based on a Lightweight Improved ConvNeXt-Tiny Architecture
CV and Pattern Recognition
Helps doctors find diseases faster with less computer power.