Neural expressiveness for beyond importance model compression
By: Angelos-Christos Maroudis, Sotirios Xydis
Potential Business Impact:
Makes computer programs smaller and faster.
Neural Network Pruning has been established as a driving force in the exploration of memory- and energy-efficient solutions with high throughput, both during training and at test time. In this paper, we introduce a novel criterion for model compression, named "Expressiveness". Unlike existing pruning methods that rely on the inherent "Importance" of neurons' and filters' weights, "Expressiveness" emphasizes the ability of a neuron, or group of neurons, to redistribute informational resources effectively, based on the overlap of activations. This characteristic is strongly correlated with a network's initialization state, making the criterion independent of the learning state and thus setting a new fundamental basis for expanding compression strategies with respect to the "When to Prune" question. We show that expressiveness is effectively approximated with arbitrary data or a limited number of the dataset's representative samples, laying the ground for the exploration of data-agnostic strategies. Our work also facilitates a "hybrid" formulation of expressiveness- and importance-based pruning strategies, illustrating their complementary benefits and delivering up to 10x additional gains in parameter compression ratio over weight-based approaches, with an average performance degradation of 1%. We also show that employing expressiveness independently for pruning improves compression efficiency over top-performing and foundational methods. Finally, on YOLOv8, we achieve a 46.1% MACs reduction by removing 55.4% of the parameters, with a 3% increase in mean Average Precision ($mAP_{50-95}$) for object detection on the COCO dataset.
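The abstract defines expressiveness via the overlap of activations rather than weight magnitudes. The sketch below is a minimal, hypothetical illustration of that idea (not the authors' actual formulation): it scores channels by how much their binarized activation patterns overlap with other channels on a small batch, so that highly overlapping (redundant) channels become pruning candidates. The function name `expressiveness_scores` and the Jaccard-overlap choice are assumptions made for illustration.

```python
# Minimal sketch (not the paper's code): a channel-level "expressiveness"
# proxy based on activation overlap, using PyTorch.
import torch

def expressiveness_scores(activations: torch.Tensor) -> torch.Tensor:
    """activations: (N, C, H, W) feature maps of one layer on a small batch."""
    n, c = activations.shape[0], activations.shape[1]
    # Binarize: a unit "fires" when its activation is positive (post-ReLU style).
    fired = (activations > 0).float().reshape(n, c, -1)        # (N, C, HW)
    fired = fired.permute(1, 0, 2).reshape(c, -1)              # (C, N*HW)
    # Pairwise Jaccard overlap between channel firing patterns.
    inter = fired @ fired.t()                                   # (C, C)
    counts = fired.sum(dim=1, keepdim=True)                     # (C, 1)
    union = counts + counts.t() - inter
    jaccard = inter / union.clamp(min=1.0)
    # A channel that overlaps heavily with the others is treated as less expressive.
    mean_overlap = (jaccard.sum(dim=1) - 1.0) / max(c - 1, 1)   # exclude self-overlap
    return 1.0 - mean_overlap                                    # higher = more expressive

# Usage: rank channels and mark the lowest-scoring ones for removal.
acts = torch.relu(torch.randn(8, 64, 14, 14))   # arbitrary data suffices, per the abstract
scores = expressiveness_scores(acts)
prune_idx = torch.argsort(scores)[:16]           # candidate channels to prune
```

Because the score depends only on activation patterns, it can be computed with arbitrary or very limited data, consistent with the data-agnostic angle described above; it can also be combined with weight-based importance scores in a hybrid ranking.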
Similar Papers
Compressing CNN models for resource-constrained systems by channel and layer pruning
Machine Learning (CS)
Makes smart computer programs smaller and faster.
C-SWAP: Explainability-Aware Structured Pruning for Efficient Neural Networks Compression
CV and Pattern Recognition
Makes computer "brains" smaller without losing smarts.
Hyperflux: Pruning Reveals the Importance of Weights
Machine Learning (Stat)
Makes smart computer programs smaller and faster.