Subnet-Aware Dynamic Supernet Training for Neural Architecture Search
By: Jeimin Jeon, Youngmin Oh, Junghyup Lee, and more
Potential Business Impact:
Makes AI design itself faster and better.
N-shot neural architecture search (NAS) exploits a supernet containing all candidate subnets for a given search space. The subnets are typically trained with a static training strategy (e.g., using the same learning rate (LR) scheduler and optimizer for all subnets). This, however, ignores the fact that individual subnets have distinct characteristics, leading to two problems: (1) the supernet training is biased towards low-complexity subnets (unfairness); (2) the momentum update in the supernet is noisy (noisy momentum). We present a dynamic supernet training technique that addresses these problems by adjusting the training strategy adaptively for each subnet. Specifically, we introduce a complexity-aware LR scheduler (CaLR) that controls the LR decay ratio according to the complexity of each subnet, which alleviates the unfairness problem. We also present a momentum separation technique (MS): it groups subnets with similar structural characteristics and uses a separate momentum buffer for each group, avoiding the noisy momentum problem. Our approach is applicable to various N-shot NAS methods at marginal cost, while drastically improving search performance. We validate its effectiveness on various search spaces (e.g., NAS-Bench-201 and the MobileNet space) and datasets (e.g., CIFAR-10/100 and ImageNet).
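The abstract describes CaLR and MS only at a high level. The toy Python sketch below illustrates one plausible reading of the two ideas, not the authors' implementation: it assumes subnet complexity is a normalized score in [0, 1], that CaLR modulates a cosine-style decay with a complexity-dependent exponent, and that structural groups are identified by a user-supplied key (e.g., depth). All of these specifics are assumptions for illustration; the paper's actual schedule and grouping rule may differ.

import math
from collections import defaultdict

def calr_lr(base_lr, step, total_steps, complexity, min_ratio=0.1):
    """Hypothetical complexity-aware LR: lower-complexity subnets decay faster."""
    # Map complexity in [0, 1] to a decay exponent; low complexity -> larger
    # exponent -> the cosine factor shrinks faster.
    exponent = 1.0 + (1.0 - complexity)
    progress = step / max(total_steps, 1)
    cosine = 0.5 * (1.0 + math.cos(math.pi * progress))
    return base_lr * (min_ratio + (1.0 - min_ratio) * cosine ** exponent)

class GroupedMomentumSGD:
    """SGD that keeps one momentum buffer per structural group of subnets,
    so updates from structurally dissimilar subnets do not mix (sketch of MS)."""
    def __init__(self, momentum=0.9):
        self.momentum = momentum
        self.buffers = defaultdict(dict)  # group_id -> {param_name: velocity}

    def step(self, params, grads, lr, group_id):
        buf = self.buffers[group_id]
        for name, value in params.items():
            velocity = self.momentum * buf.get(name, 0.0) + grads[name]
            buf[name] = velocity
            params[name] = value - lr * velocity
        return params

# Toy usage: two subnets of different complexity, assigned to different
# structural groups, update the same shared weight.
opt = GroupedMomentumSGD(momentum=0.9)
weights = {"w": 1.0}
lr_small = calr_lr(0.1, step=50, total_steps=100, complexity=0.2)  # cheap subnet
weights = opt.step(weights, {"w": 0.5}, lr_small, group_id="shallow")
lr_large = calr_lr(0.1, step=50, total_steps=100, complexity=0.9)  # expensive subnet
weights = opt.step(weights, {"w": -0.3}, lr_large, group_id="deep")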
Similar Papers
RAM-NAS: Resource-aware Multiobjective Neural Architecture Search Method for Robot Vision Tasks
Robotics
Makes robot brains faster and smarter on devices.
Meta knowledge assisted Evolutionary Neural Architecture Search
Neural and Evolutionary Computing
Finds best computer brains faster and cheaper.
Deep Hierarchical Learning with Nested Subspace Networks
Machine Learning (CS)
Lets one smart computer program use less power.