Hybrid Convolution and Vision Transformer NAS Search Space for TinyML Image Classification

Published: November 4, 2025 | arXiv ID: 2511.02992v1

By: Mikhael Djajapermana, Moritz Reiber, Daniel Mueller-Gritschneder, and more

Potential Business Impact:

Enables tiny, low-power devices to recognize images faster.

Business Areas:
Image Recognition Data and Analytics, Software

Hybrid Convolutional Neural Network (CNN) and Vision Transformer (ViT) architectures have outperformed pure CNN or ViT designs. However, because these architectures have large parameter counts and incur high computational costs, they are unsuitable for tinyML deployment. This paper introduces a new hybrid CNN-ViT search space for Neural Architecture Search (NAS) to find efficient hybrid architectures for image classification. The search space combines CNN and ViT blocks, which learn local and global information respectively, with a novel Pooling block of searchable pooling layers for efficient feature-map reduction. Experimental results on the CIFAR-10 dataset show that the proposed search space can produce hybrid CNN-ViT architectures with accuracy and inference speed superior to ResNet-based tinyML models under tight model-size constraints.
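To make the idea of a searchable hybrid space concrete, here is a minimal sketch of how such a NAS search space might be encoded and sampled. All block names, parameters, and option values below are illustrative assumptions, not the paper's actual configuration; the point is only the structure: per-block choice lists (including a Pooling block with a searchable reduction operator) from which candidate architectures are drawn.

```python
import random

# Hypothetical hybrid CNN-ViT search space (illustrative values only,
# not taken from the paper). Each block exposes a list of options per
# searchable parameter; the Pooling block makes the feature-map
# reduction operator itself searchable.
SEARCH_SPACE = {
    "conv_block": {
        "kernel_size": [3, 5],
        "out_channels": [16, 32, 64],
    },
    "vit_block": {
        "num_heads": [2, 4],
        "embed_dim": [64, 128],
    },
    "pooling_block": {
        "op": ["max", "avg", "strided_conv"],
        "stride": [2, 4],
    },
}

def sample_architecture(space, rng=random):
    """Randomly sample one candidate architecture from the search space."""
    return {
        block: {param: rng.choice(options) for param, options in params.items()}
        for block, params in space.items()
    }

candidate = sample_architecture(SEARCH_SPACE)
```

A NAS algorithm would repeatedly sample (or evolve) such candidates, train and evaluate each under the model-size constraint, and keep the best-performing hybrid architecture.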

Country of Origin
🇩🇪 🇦🇹 Germany, Austria

Page Count
12 pages

Category
Computer Science:
CV and Pattern Recognition