GrowTAS: Progressive Expansion from Small to Large Subnets for Efficient ViT Architecture Search
By: Hyunju Lee, Youngmin Oh, Jeimin Jeon, and more
Potential Business Impact:
Finds the best computer vision designs faster.
Transformer architecture search (TAS) aims to automatically discover efficient vision transformers (ViTs), reducing the need for manual design. Existing TAS methods typically train an over-parameterized network (i.e., a supernet) that encompasses all candidate architectures (i.e., subnets). However, all subnets share the same set of weights, which leads to interference that severely degrades the smaller subnets. We have found that well-trained small subnets can serve as a good foundation for training larger ones. Motivated by this, we propose a progressive training framework, dubbed GrowTAS, that begins by training small subnets and gradually incorporates larger ones. This reduces the interference and stabilizes the training process. We also introduce GrowTAS+, which fine-tunes only a subset of weights to further enhance the performance of large subnets. Extensive experiments on ImageNet and several transfer learning benchmarks, including CIFAR-10/100, Flowers, CARS, and INAT-19, demonstrate the effectiveness of our approach over current TAS methods.
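The core idea of progressive expansion can be illustrated with a minimal sketch: restrict subnet sampling to the smallest candidates early in training and widen the pool in stages. This is only an illustrative schedule under assumed names; the search space values, the `active_candidates` helper, and the placeholder `train_step` are hypothetical and not taken from the paper.

```python
import random

# Hypothetical ViT search space (not from the paper): each candidate is
# (depth, embed_dim), ordered roughly from smallest to largest subnet.
SEARCH_SPACE = [(d, e) for d in (10, 12, 14, 16) for e in (320, 384, 448, 512)]
SEARCH_SPACE.sort(key=lambda c: c[0] * c[1])  # order by rough parameter count

def active_candidates(epoch, total_epochs, num_stages=4):
    """Progressively enlarge the pool of subnets that may be sampled.

    Early epochs expose only the smallest subnets; later stages add
    progressively larger ones, so large subnets are trained on top of
    already well-trained small ones.
    """
    stage = min(num_stages, 1 + epoch * num_stages // total_epochs)
    cutoff = len(SEARCH_SPACE) * stage // num_stages
    return SEARCH_SPACE[:cutoff]

def train(total_epochs=300):
    for epoch in range(total_epochs):
        pool = active_candidates(epoch, total_epochs)
        depth, embed_dim = random.choice(pool)  # sample one subnet per step
        # train_step(supernet, subnet=(depth, embed_dim), batch)  # placeholder
        if epoch % 75 == 0:
            print(f"epoch {epoch}: sampling from {len(pool)} of {len(SEARCH_SPACE)} subnets")

if __name__ == "__main__":
    train()
```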
Similar Papers
Progressive Supernet Training for Efficient Visual Autoregressive Modeling
CV and Pattern Recognition
Makes AI image creation faster and reduces memory use.
Accelerating Vision Transformers with Adaptive Patch Sizes
CV and Pattern Recognition
Makes computer vision faster by changing picture piece sizes.
ScaleNet: Scaling up Pretrained Neural Networks with Incremental Parameters
CV and Pattern Recognition
Makes computer vision models learn faster and better.