RAM-NAS: Resource-aware Multiobjective Neural Architecture Search Method for Robot Vision Tasks
By: Shouren Mao, Minghao Qin, Wei Dong, and more
Potential Business Impact:
Makes robot brains faster and smarter on devices.
Neural architecture search (NAS) has shown great promise in automatically designing lightweight models. However, conventional approaches train the supernet inadequately and pay little attention to the actual hardware resources of robots. To meet these challenges, we propose RAM-NAS, a resource-aware multi-objective NAS method that focuses on improving supernet pretraining and resource awareness on robot hardware devices. We introduce the concept of subnet mutual distillation, in which all subnets sampled by the sandwich rule distill knowledge from one another. Additionally, we utilize the Decoupled Knowledge Distillation (DKD) loss to enhance logits distillation performance. To expedite the search process while accounting for hardware resources, we use data from three types of robotic edge hardware to train Latency Surrogate predictors. These predictors estimate hardware inference latency during the search phase, enabling a unified multi-objective evolutionary search that balances the trade-off between model accuracy and latency. Our discovered model family, the RAM-NAS models, achieves top-1 accuracy ranging from 76.7% to 81.4% on ImageNet. In addition, the resource-aware multi-objective search significantly reduces model inference latency on robotic edge hardware. We conducted experiments on downstream tasks to verify the scalability of our method: inference time for detection and segmentation is reduced on all three hardware types compared to MobileNetV3-based methods. Our work fills the gap in resource-aware NAS for robot hardware.
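As a rough illustration of the training scheme described in the abstract, the sketch below shows sandwich-rule subnet sampling with mutual distillation among the sampled subnets, using a simplified DKD-style logits loss. It uses a toy slimmable MLP as a stand-in supernet; all names (SlimmableMLP, dkd_loss, the specific width choices and hyperparameters) are illustrative assumptions, not the paper's actual implementation.

# Minimal sketch: sandwich-rule sampling + subnet mutual distillation with a
# simplified DKD-style loss. Toy slimmable MLP stands in for the real supernet.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlimmableMLP(nn.Module):
    """Toy supernet whose hidden width can be truncated to emulate subnets."""
    def __init__(self, in_dim=32, max_hidden=64, num_classes=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, max_hidden)
        self.fc2 = nn.Linear(max_hidden, num_classes)
        self.max_hidden = max_hidden

    def forward(self, x, width):
        # Slice the weights to run a narrower "subnet" of the supernet.
        h = F.relu(F.linear(x, self.fc1.weight[:width], self.fc1.bias[:width]))
        return F.linear(h, self.fc2.weight[:, :width], self.fc2.bias)

def dkd_loss(student_logits, teacher_logits, target, alpha=1.0, beta=2.0, T=4.0):
    """Simplified Decoupled KD: separate target-class and non-target-class terms."""
    gt_mask = F.one_hot(target, student_logits.size(1)).bool()
    # Target-class term (TCKD): binary split of probability mass (target vs. rest).
    p_s = torch.stack([(F.softmax(student_logits / T, 1) * gt_mask).sum(1),
                       (F.softmax(student_logits / T, 1) * ~gt_mask).sum(1)], dim=1)
    p_t = torch.stack([(F.softmax(teacher_logits / T, 1) * gt_mask).sum(1),
                       (F.softmax(teacher_logits / T, 1) * ~gt_mask).sum(1)], dim=1)
    tckd = F.kl_div(p_s.clamp_min(1e-8).log(), p_t, reduction="batchmean") * T * T
    # Non-target-class term (NCKD): distribution over the remaining classes only.
    s_masked = student_logits.masked_fill(gt_mask, -1e9)
    t_masked = teacher_logits.masked_fill(gt_mask, -1e9)
    nckd = F.kl_div(F.log_softmax(s_masked / T, 1), F.softmax(t_masked / T, 1),
                    reduction="batchmean") * T * T
    return alpha * tckd + beta * nckd

def train_step(supernet, opt, x, y, num_random=2):
    # Sandwich rule: largest subnet, smallest subnet, plus a few random subnets.
    widths = [supernet.max_hidden, 16] + [random.choice([24, 32, 48]) for _ in range(num_random)]
    logits = [supernet(x, w) for w in widths]
    loss = sum(F.cross_entropy(l, y) for l in logits)
    # Mutual distillation: each sampled subnet distills from every other subnet's logits.
    for i, l_student in enumerate(logits):
        for j, l_teacher in enumerate(logits):
            if i != j:
                loss = loss + dkd_loss(l_student, l_teacher.detach(), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

if __name__ == "__main__":
    net = SlimmableMLP()
    opt = torch.optim.SGD(net.parameters(), lr=0.1)
    x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
    print(train_step(net, opt, x, y))

The detach() on the teacher logits keeps each pairwise distillation one-directional per term while still letting every subnet act as both student and teacher across the pair loop, which is one plausible reading of "mutually distilling all subnets sampled by the sandwich rule."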
Similar Papers
A Continuous Encoding-Based Representation for Efficient Multi-Fidelity Multi-Objective Neural Architecture Search
Machine Learning (CS)
Finds best computer designs faster for complex jobs.
Subnet-Aware Dynamic Supernet Training for Neural Architecture Search
Computer Vision and Pattern Recognition
Makes AI design itself faster and better.
HyperNAS: Enhancing Architecture Representation for NAS Predictor via Hypernetwork
Machine Learning (CS)
Finds best computer brain designs faster.