Hybrid Convolution and Vision Transformer NAS Search Space for TinyML Image Classification
By: Mikhael Djajapermana, Moritz Reiber, Daniel Mueller-Gritschneder, and others
Potential Business Impact:
Makes tiny computers recognize pictures faster.
Hybrids of Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) have outperformed pure CNN or ViT architectures. However, because these hybrid architectures require many parameters and incur high computational costs, they are unsuitable for tinyML deployment. This paper introduces a new hybrid CNN-ViT search space for Neural Architecture Search (NAS) to find efficient hybrid architectures for image classification. The search space combines CNN and ViT blocks, which learn local and global information respectively, with a novel Pooling block of searchable pooling layers for efficient feature map reduction. Experimental results on the CIFAR10 dataset show that our proposed search space can produce hybrid CNN-ViT architectures with accuracy and inference speed superior to ResNet-based tinyML models under tight model size constraints.
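To make the three block types concrete, here is a minimal NumPy sketch of one candidate architecture from such a search space: a convolution block for local features, a single-head self-attention block for global features, and a pooling block whose reduction operator (max vs. average) is one searchable decision. All function names, shapes, and the 2x2 pooling choice are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def conv_block(x, w):
    # Local features: 3x3 same-padding convolution followed by ReLU.
    # x: (H, W, C_in), w: (3, 3, C_in, C_out) — shapes are illustrative.
    H, W, _ = x.shape
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros((H, W, w.shape[3]))
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + 3, j:j + 3, :]           # (3, 3, C_in) receptive field
            out[i, j] = np.tensordot(patch, w, axes=3)
    return np.maximum(out, 0.0)

def vit_block(x):
    # Global features: single-head self-attention over all spatial tokens.
    H, W, C = x.shape
    t = x.reshape(H * W, C)                           # flatten pixels into tokens
    scores = t @ t.T / np.sqrt(C)                     # scaled dot-product similarity
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)           # row-wise softmax
    return (attn @ t).reshape(H, W, C)                # every token attends to all others

def pooling_block(x, mode):
    # Searchable pooling: NAS picks 'max' or 'avg' for 2x2 feature map reduction.
    H, W, C = x.shape
    x = x[:H - H % 2, :W - W % 2].reshape(H // 2, 2, W // 2, 2, C)
    return x.max(axis=(1, 3)) if mode == "max" else x.mean(axis=(1, 3))

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 4))                    # toy 8x8 feature map, 4 channels
w = rng.standard_normal((3, 3, 4, 4)) * 0.1
for mode in ("max", "avg"):                           # one searchable decision point
    y = pooling_block(vit_block(conv_block(x, w)), mode)
    print(mode, y.shape)                              # spatial size halved: (4, 4, 4)
```

A real NAS run would treat the block ordering, channel widths, and the pooling operator as discrete choices and score each sampled architecture on accuracy and on-device latency under the model size constraint.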
Similar Papers
Hands-on Evaluation of Visual Transformers for Object Recognition and Detection
CV and Pattern Recognition
Helps computers see the whole picture, not just parts.
Powerful Design of Small Vision Transformer on CIFAR10
Machine Learning (CS)
Makes AI work better on small amounts of data.
CNN and ViT Efficiency Study on Tiny ImageNet and DermaMNIST Datasets
CV and Pattern Recognition
Makes AI see pictures faster and with less power.