GrowTAS: Progressive Expansion from Small to Large Subnets for Efficient ViT Architecture Search

Published: December 13, 2025 | arXiv ID: 2512.12296v1

By: Hyunju Lee, Youngmin Oh, Jeimin Jeon and more

Potential Business Impact:

Finds efficient vision transformer designs automatically, reducing the time and compute spent on manual architecture design.

Business Areas:
Autonomous Vehicles, Transportation

Transformer architecture search (TAS) aims to automatically discover efficient vision transformers (ViTs), reducing the need for manual design. Existing TAS methods typically train an over-parameterized network (i.e., a supernet) that encompasses all candidate architectures (i.e., subnets). However, all subnets share the same set of weights, which leads to interference that severely degrades the smaller subnets. We have found that well-trained small subnets can serve as a good foundation for training larger ones. Motivated by this, we propose a progressive training framework, dubbed GrowTAS, that begins with training small subnets and gradually incorporates larger ones. This reduces interference and stabilizes the training process. We also introduce GrowTAS+, which fine-tunes only a subset of weights to further enhance the performance of large subnets. Extensive experiments on ImageNet and several transfer learning benchmarks, including CIFAR-10/100, Flowers, CARS, and INAT-19, demonstrate the effectiveness of our approach over current TAS methods.
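To make the progressive idea concrete, here is a minimal sketch of what such a training schedule might look like in PyTorch: early epochs sample only small subnets, and the cap on subnet size grows until the whole search space is covered. The config keys, the `sample_subnet_config` helper, the linear growth schedule, and the assumption that the supernet forward pass accepts a subnet config are all illustrative choices, not the authors' implementation.

```python
import random
import torch
import torch.nn.functional as F

def sample_subnet_config(max_depth, max_width, cap_ratio):
    """Sample a subnet whose depth/width stay within cap_ratio of the maximum.

    cap_ratio grows over training, so small subnets are trained first and
    larger ones are incorporated gradually (hypothetical helper).
    """
    depth = random.randint(1, max(1, int(max_depth * cap_ratio)))
    width = random.randint(1, max(1, int(max_width * cap_ratio)))
    return {"depth": depth, "width": width}

def progressive_supernet_training(supernet, loader, optimizer, epochs,
                                  max_depth=12, max_width=768):
    """Train a supernet by progressively expanding the sampled subnet sizes."""
    for epoch in range(epochs):
        # Grow the allowed subnet size, e.g. linearly from 25% to 100%.
        cap_ratio = 0.25 + 0.75 * epoch / max(1, epochs - 1)
        for images, labels in loader:
            cfg = sample_subnet_config(max_depth, max_width, cap_ratio)
            # Assumption: the supernet forward pass accepts a subnet config.
            logits = supernet(images, **cfg)
            loss = F.cross_entropy(logits, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```

The key design choice sketched here is the growth schedule: by the time large subnets are sampled, the shared weights have already been shaped by the small ones, which is the foundation effect the abstract describes.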

Page Count
10 pages

Category
Computer Science:
Computer Vision and Pattern Recognition