ZeroLM: Data-Free Transformer Architecture Search for Language Models
By: Zhen-Song Chen, Hong-Wei Ding, Xian-Jia Wang, and more
Potential Business Impact:
Finds best computer brains faster and cheaper.
Neural architecture search (NAS) provides a systematic framework for automating the design of neural network architectures, yet its widespread adoption is hindered by prohibitive computational requirements. Existing zero-cost proxy methods, while reducing search overhead, demonstrate inadequate performance in architecture ranking tasks, particularly for Transformer-based models where they often underperform simple parameter counting metrics. Current automated proxy discovery approaches suffer from extended search times, susceptibility to data overfitting, and structural complexity. This paper introduces a novel zero-cost proxy methodology that quantifies model capacity through efficient weight statistics computation while decomposing Transformer architectures into functionally distinct sub-modules, thereby optimizing the balance of their contributions to overall performance. Our comprehensive evaluation demonstrates the superiority of this approach, achieving a Spearman's rho of 0.76 and Kendall's tau of 0.53 on the FlexiBERT benchmark. The proposed method exhibits exceptional computational efficiency while maintaining robust performance across diverse NAS benchmark tasks, offering a practical solution for large-scale architecture search.
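To make the core idea concrete, below is a minimal sketch of a data-free, weight-statistics proxy in the spirit of the abstract, not the authors' exact formulation. Assumptions to note: the per-sub-module statistic (sum of absolute weight values), the `alpha` weighting between attention and feed-forward contributions, and the `build_encoder` helper are all illustrative choices introduced here; the paper's actual statistics and module decomposition may differ. The final lines show how ranking quality against benchmark accuracies would be measured with Spearman's rho and Kendall's tau.

```python
# Sketch only: a simple data-free proxy that scores untrained Transformer
# encoders from weight statistics, split by sub-module type. The statistic,
# the alpha weighting, and the placeholder accuracies are assumptions for
# illustration, not the paper's method or results.
import torch
import torch.nn as nn
from scipy.stats import spearmanr, kendalltau


def submodule_statistic(module: nn.Module) -> float:
    """Cheap weight statistic: sum of |w| over all parameters of a sub-module."""
    return sum(p.detach().abs().sum().item() for p in module.parameters())


def zero_cost_proxy(model: nn.TransformerEncoder, alpha: float = 0.5) -> float:
    """Score an untrained encoder by combining attention and FFN statistics."""
    attn_score, ffn_score = 0.0, 0.0
    for layer in model.layers:
        attn_score += submodule_statistic(layer.self_attn)
        ffn_score += submodule_statistic(layer.linear1) + submodule_statistic(layer.linear2)
    # alpha balances the two functionally distinct sub-module groups.
    return alpha * attn_score + (1.0 - alpha) * ffn_score


def build_encoder(d_model: int, n_heads: int, ffn_dim: int, n_layers: int) -> nn.TransformerEncoder:
    layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=ffn_dim, batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=n_layers)


if __name__ == "__main__":
    # Hypothetical candidate architectures: (d_model, heads, ffn_dim, layers).
    candidates = [(128, 2, 256, 2), (256, 4, 512, 2), (256, 4, 1024, 4)]
    scores = [zero_cost_proxy(build_encoder(*cfg)) for cfg in candidates]

    # Placeholder ground-truth accuracies, only to show how ranking quality
    # is evaluated; real values would come from a NAS benchmark such as FlexiBERT.
    accuracies = [0.71, 0.78, 0.83]
    rho, _ = spearmanr(scores, accuracies)
    tau, _ = kendalltau(scores, accuracies)
    print("Spearman's rho:", rho)
    print("Kendall's tau:", tau)
```

Because the proxy touches each weight tensor once and requires no forward passes or training data, scoring a candidate costs a fraction of a second, which is what makes large-scale ranking of architectures practical.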
Similar Papers
Transferrable Surrogates in Expressive Neural Architecture Search Spaces
Machine Learning (CS)
Finds best computer designs faster.
Dextr: Zero-Shot Neural Architecture Search with Singular Value Decomposition and Extrinsic Curvature
CV and Pattern Recognition
Finds best computer brain designs without data.
Kernel-Level Energy-Efficient Neural Architecture Search for Tabular Dataset
Machine Learning (CS)
Finds computer designs that use way less power.