Enhanced Pruning Strategy for Multi-Component Neural Architectures Using Component-Aware Graph Analysis
By: Ganesh Sundaram, Jonas Ulmen, Daniel Görges
Potential Business Impact:
Makes big computer brains smaller without losing smarts.
Deep neural networks (DNNs) deliver outstanding performance, but their complexity often prohibits deployment in resource-constrained settings. Comprehensive structured pruning frameworks based on parameter dependency analysis reduce model size while accounting for computational performance. When applied to Multi-Component Neural Architectures (MCNAs), however, they risk compromising network integrity by removing large parameter groups. We introduce a component-aware pruning strategy that extends dependency graphs to isolate individual components and inter-component flows. This creates smaller, targeted pruning groups that preserve functional integrity. Demonstrated on a control task, our approach achieves greater sparsity with less performance degradation, opening a path toward efficiently optimizing complex, multi-component DNNs.
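The core idea can be sketched in a few lines: within a component, dependent parameters (e.g., the output channels of one layer and the matching input columns of the next) are pruned together as one group, while the group stops at the component boundary so inter-component flows stay intact. All names, shapes, and the magnitude-based importance criterion below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-component model (names/shapes are assumptions):
# encoder: fc1 (4 -> 8) feeding fc2 (8 -> 8); policy: fc1 (8 -> 6).
weights = {
    "encoder.fc1": rng.normal(size=(8, 4)),   # rows = output channels
    "encoder.fc2": rng.normal(size=(8, 8)),   # columns = input channels
    "policy.fc1":  rng.normal(size=(6, 8)),
}

def prune_encoder(weights, ratio=0.25):
    """One component-local dependency group: pruning an output channel of
    encoder.fc1 forces removal of the coupled input column of encoder.fc2.
    The group stops at the component boundary, so policy.* is untouched."""
    w1, w2 = weights["encoder.fc1"], weights["encoder.fc2"]
    norms = np.linalg.norm(w1, axis=1)                 # per-channel importance
    keep = np.sort(np.argsort(norms)[int(len(norms) * ratio):])
    return {
        "encoder.fc1": w1[keep],        # drop the weakest output channels
        "encoder.fc2": w2[:, keep],     # drop the coupled input columns
        "policy.fc1":  weights["policy.fc1"],
    }

slim = prune_encoder(weights)
for name, w in slim.items():
    print(name, weights[name].shape, "->", w.shape)
```

Because the encoder-to-policy connection is an inter-component flow, it is deliberately left out of the pruning group; a component-agnostic dependency graph would instead merge all three layers into one large, fragile group.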
Similar Papers
Application-Specific Component-Aware Structured Pruning of Deep Neural Networks via Soft Coefficient Optimization
Machine Learning (CS)
Makes smart computer programs smaller while still working well.
Exploring Neural Network Pruning with Screening Methods
Machine Learning (CS)
Makes smart computer programs run faster on phones.
Adaptive Pruning of Deep Neural Networks for Resource-Aware Embedded Intrusion Detection on the Edge
Machine Learning (CS)
Makes computer security programs smaller and faster.