Pruning as a Game: Equilibrium-Driven Sparsification of Neural Networks
By: Zubair Shah, Noaman Khan
Neural network pruning is widely used to reduce model size and computational cost. Yet most existing methods treat sparsity as an externally imposed constraint, enforced through heuristic importance scores or training-time regularization. In this work, we propose a fundamentally different perspective: pruning as an equilibrium outcome of strategic interaction among model components. We model parameter groups such as weights, neurons, or filters as players in a continuous non-cooperative game, where each player selects its level of participation in the network to balance contribution against redundancy and competition. Within this formulation, sparsity emerges naturally when continued participation becomes a dominated strategy at equilibrium. We analyze the resulting game and show that dominated players collapse to zero participation under mild conditions, providing a principled explanation for pruning behavior. Building on this insight, we derive a simple equilibrium-driven pruning algorithm that jointly updates network parameters and participation variables without relying on explicit importance scores. This work focuses on establishing a principled formulation and empirical validation of pruning as an equilibrium phenomenon, rather than exhaustive architectural exploration or large-scale benchmarking. Experiments on standard benchmarks demonstrate that the proposed approach achieves competitive sparsity-accuracy trade-offs while offering an interpretable, theory-grounded alternative to existing pruning methods.
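The abstract describes an algorithm that jointly updates network parameters and per-unit participation variables, with sparsity emerging when a player's participation collapses to zero. The following PyTorch sketch is a rough illustration of that idea only, not the paper's method: the gate variable `participation`, the `redundancy_penalty` competition term, the coefficient `lam`, and the pruning threshold are all illustrative assumptions, since the abstract does not specify the actual payoff functions or update rule.

```python
# Hypothetical sketch: joint gradient updates of weights and per-neuron
# "participation" gates, with a competition-style penalty. The specific
# penalty form, `lam`, and the 0.05 threshold are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedLinear(nn.Module):
    """Linear layer whose output neurons ("players") carry a learnable participation level."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # One unconstrained logit per output neuron; sigmoid keeps participation in [0, 1].
        self.participation_logit = nn.Parameter(torch.zeros(out_features))

    def participation(self):
        return torch.sigmoid(self.participation_logit)

    def forward(self, x):
        # Each neuron's output is scaled by its current participation level.
        return self.linear(x) * self.participation()

def redundancy_penalty(layer, lam=1e-3):
    """Illustrative competition term: each player pays a cost for participating,
    scaled by the total participation of the other players in the same layer."""
    p = layer.participation()
    total = p.sum()
    return lam * (p * (total - p)).sum()

# Toy joint update of weights and participation variables (no importance scores).
model = nn.Sequential(GatedLinear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(256, 20), torch.randint(0, 2, (256,))

for step in range(200):
    opt.zero_grad()
    loss = F.cross_entropy(model(x), y)
    for m in model:
        if isinstance(m, GatedLinear):
            loss = loss + redundancy_penalty(m)
    loss.backward()
    opt.step()

# "Pruning" = removing players whose participation has collapsed toward zero.
for m in model:
    if isinstance(m, GatedLinear):
        keep = m.participation() > 0.05
        print(f"kept {int(keep.sum())}/{keep.numel()} neurons")
```

In this toy setup the penalty makes continued participation costly for neurons that contribute little to reducing the task loss, so their gates drift toward zero during the joint updates, loosely mirroring the abstract's notion of dominated players exiting at equilibrium.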