Coordinate Descent for Network Linearization
By: Vlad Rakhlin, Amir Jevnisek, Shai Avidan
Potential Business Impact:
Makes AI faster by cutting down on calculations.
ReLU activations are the main bottleneck in Private Inference over ResNet networks because they incur significant inference latency. Reducing the ReLU count is a discrete optimization problem, and there are two common ways to approach it. Most current state-of-the-art methods rely on a smooth approximation that jointly optimizes network accuracy and the ReLU budget. However, the final hard-thresholding step of that optimization usually introduces a large performance loss. We take an alternative approach that works directly in the discrete domain by using Coordinate Descent as our optimization framework. In contrast to previous methods, this yields a sparse solution by design. We demonstrate, through extensive experiments, that our method achieves state-of-the-art results on common benchmarks.
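To make the idea concrete, below is a minimal sketch of discrete coordinate descent over a binary ReLU mask. The MaskedReLU module, the penalized objective, and the greedy flip-and-revert loop are illustrative assumptions for this sketch, not the authors' implementation; the paper itself works under an explicit ReLU budget.

```python
import torch
import torch.nn as nn

class MaskedReLU(nn.Module):
    """ReLU that can be linearized per channel via a binary mask."""
    def __init__(self, num_channels):
        super().__init__()
        # mask[c] = 1 keeps the ReLU on channel c; 0 replaces it with the identity.
        self.register_buffer("mask", torch.ones(num_channels))

    def forward(self, x):
        m = self.mask.view(1, -1, 1, 1)
        return m * torch.relu(x) + (1.0 - m) * x

def coordinate_descent(masks, objective, passes=2):
    """Greedy coordinate descent over the binary ReLU masks.

    masks:     list of MaskedReLU modules inside the network.
    objective: callable scoring the current network, e.g. validation
               accuracy minus a penalty on the number of active ReLUs.
    """
    best = objective()
    for _ in range(passes):
        for m in masks:
            for c in range(m.mask.numel()):
                m.mask[c] = 1.0 - m.mask[c]      # flip one coordinate
                score = objective()
                if score > best:
                    best = score                  # keep the flip
                else:
                    m.mask[c] = 1.0 - m.mask[c]   # revert it
    return best
```

In practice the objective would be evaluated on a held-out validation set, and a hard ReLU budget could be enforced by rejecting flips that exceed it instead of penalizing the active count.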
Similar Papers
Complexity of One-Dimensional ReLU DNNs
Machine Learning (CS)
Makes AI understand patterns with fewer parts.
On the Complexity-Faithfulness Trade-off of Gradient-Based Explanations
Machine Learning (CS)
Makes AI explanations clearer and more trustworthy.
Parameterized Hardness of Zonotope Containment and Neural Network Verification
Computational Complexity
Shows that checking AI for mistakes is computationally hard.