OptiProxy-NAS: Optimization Proxy based End-to-End Neural Architecture Search
By: Bo Lyu, Yu Cui, Tuo Shi, and more
Potential Business Impact:
Finds best computer brains faster and cheaper.
Neural architecture search (NAS) is a hard, computationally expensive optimization problem over a discrete, vast, and spiky search space. One key line of research accelerates NAS via proxy evaluations of candidate architectures. Unlike the prevalent predictor-based methods that rely on surrogate models, and unlike differentiable architecture search via supernetworks, we propose an optimization proxy that streamlines NAS into an end-to-end optimization framework, named OptiProxy-NAS. In particular, using a proxy representation, the NAS space is reformulated to be continuous, differentiable, and smooth, so any differentiable optimization method can be applied to gradient-based search over the relaxed architecture parameters. Comprehensive experiments on 12 NAS tasks across 4 search spaces in three domains, including computer vision, natural language processing, and resource-constrained NAS, demonstrate superior search results and efficiency. Further experiments in low-fidelity scenarios verify the method's flexibility.
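To make the relaxation concrete, here is a minimal sketch of gradient-based search over relaxed architecture parameters. It assumes a cell-style space (edges × candidate operations) and substitutes a small, untrained MLP as a stand-in for the paper's optimization proxy; the shapes, names, and proxy model are illustrative assumptions, not the authors' implementation.

```python
import torch

# Hypothetical sketch: gradient-based NAS over a continuous relaxation.
# The MLP below is a placeholder differentiable proxy, NOT the paper's
# actual optimization proxy.

NUM_EDGES, NUM_OPS = 6, 5  # assumed cell: 6 edges, 5 candidate ops

proxy = torch.nn.Sequential(          # stand-in proxy: encoding -> score
    torch.nn.Linear(NUM_EDGES * NUM_OPS, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 1),
)

# Relaxed (continuous) architecture parameters, one row per edge.
alpha = torch.zeros(NUM_EDGES, NUM_OPS, requires_grad=True)
opt = torch.optim.Adam([alpha], lr=0.05)

for step in range(200):
    probs = torch.softmax(alpha, dim=-1)        # continuous relaxation
    score = proxy(probs.flatten()).squeeze()    # proxy-predicted quality
    loss = -score                               # ascend the proxy score
    opt.zero_grad()
    loss.backward()
    opt.step()

# Discretize: keep the highest-probability op on each edge.
arch = torch.softmax(alpha, dim=-1).argmax(dim=-1)
print("selected ops per edge:", arch.tolist())
```

Because the softmax keeps the architecture encoding continuous and differentiable, any off-the-shelf gradient-based optimizer (Adam here) can drive the search, which is the property the abstract highlights.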
Similar Papers
HyperNAS: Enhancing Architecture Representation for NAS Predictor via Hypernetwork
Machine Learning (CS)
Finds best computer brain designs faster.
A Continuous Encoding-Based Representation for Efficient Multi-Fidelity Multi-Objective Neural Architecture Search
Machine Learning (CS)
Finds best computer designs faster for complex jobs.
ONNX-Net: Towards Universal Representations and Instant Performance Prediction for Neural Architectures
Machine Learning (CS)
Tests computer brains instantly, no matter their design.