nncase: An End-to-End Compiler for Efficient LLM Deployment on Heterogeneous Storage Architectures
By: Hui Guo, Qihang Zheng, Chenghai Huo, et al.
The efficient deployment of large language models (LLMs) is hindered by memory architecture heterogeneity, where traditional compilers suffer from fragmented workflows and high adaptation costs. We present nncase, an open-source, end-to-end compilation framework designed to unify optimization across diverse targets. Central to nncase is an e-graph-based term rewriting engine that mitigates the phase ordering problem, enabling global exploration of computation and data movement strategies. The framework integrates three key modules: Auto Vectorize for adapting to heterogeneous computing units, Auto Distribution for searching parallel strategies with cost-aware communication optimization, and Auto Schedule for maximizing on-chip cache locality. Furthermore, a buffer-aware Codegen phase ensures efficient kernel instantiation. Evaluations show that nncase outperforms mainstream frameworks like MLC LLM and Intel IPEX on Qwen3 series models and achieves performance comparable to the hand-optimized llama.cpp on CPUs, demonstrating the viability of automated compilation for high-performance LLM deployment. The source code is available at https://github.com/kendryte/nncase.
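To make the e-graph idea concrete, below is a minimal, self-contained sketch of equality saturation in Python. It is illustrative only: the ENode/EGraph classes, the single strength-reduction rule (mul(x, 2) == shl(x, 1)), and the per-op costs are assumptions for exposition, not nncase's actual API. What it demonstrates is why an e-graph mitigates the phase ordering problem: every discovered rewrite of a term coexists in one structure, and a cost-based extractor picks the cheapest form at the end rather than committing to a rewrite order up front.

    # Minimal equality-saturation sketch (illustrative; not nncase's API).
    from dataclasses import dataclass
    from itertools import count

    @dataclass(frozen=True)
    class ENode:
        op: str          # operator name or constant literal, e.g. "mul", "2"
        args: tuple = () # e-class ids of the children

    class EGraph:
        def __init__(self):
            self.parent = {}    # union-find over e-class ids
            self.classes = {}   # canonical e-class id -> set of ENodes
            self.hashcons = {}  # canonical ENode -> e-class id
            self.ids = count()

        def find(self, a):  # union-find lookup with path compression
            while self.parent[a] != a:
                self.parent[a] = self.parent[self.parent[a]]
                a = self.parent[a]
            return a

        def add(self, n):
            n = ENode(n.op, tuple(self.find(a) for a in n.args))
            if n in self.hashcons:
                return self.find(self.hashcons[n])
            cid = next(self.ids)
            self.parent[cid] = cid
            self.classes[cid] = {n}
            self.hashcons[n] = cid
            return cid

        def union(self, a, b):  # merge two e-classes (rebuild step omitted)
            a, b = self.find(a), self.find(b)
            if a != b:
                self.parent[b] = a
                self.classes[a] |= self.classes.pop(b)

        def is_const(self, cid, v):
            return any(n.op == str(v) and not n.args
                       for n in self.classes[self.find(cid)])

        def saturate(self):
            # Apply rewrites until a fixed point; one strength-reduction
            # rule, mul(x, 2) == shl(x, 1), stands in for a real rule set.
            changed = True
            while changed:
                changed = False
                for cid in list(self.classes):
                    for n in list(self.classes.get(self.find(cid), ())):
                        if n.op == "mul" and self.is_const(n.args[1], 2):
                            one = self.add(ENode("1"))
                            new = self.add(ENode("shl", (n.args[0], one)))
                            if self.find(new) != self.find(cid):
                                self.union(cid, new)
                                changed = True

        def extract(self, root):
            cost = {"mul": 4, "shl": 1}  # assumed per-op costs; leaves are 0
            best = {}  # e-class id -> (cost, rendered expression)
            changed = True
            while changed:
                changed = False
                for cid, nodes in self.classes.items():
                    for n in nodes:
                        if all(self.find(a) in best for a in n.args):
                            c = cost.get(n.op, 0) + sum(best[self.find(a)][0]
                                                        for a in n.args)
                            if c < best.get(cid, (float("inf"),))[0]:
                                args = ", ".join(best[self.find(a)][1]
                                                 for a in n.args)
                                best[cid] = (c, f"{n.op}({args})"
                                             if n.args else n.op)
                                changed = True
            return best[self.find(root)]

    eg = EGraph()
    x = eg.add(ENode("x"))
    root = eg.add(ENode("mul", (x, eg.add(ENode("2")))))
    eg.saturate()
    print(eg.extract(root))  # -> (1, 'shl(x, 1)'): both forms coexist in the
                             #    e-graph; the extractor picks the cheaper one.

In a production engine, saturation would run over a full rule set (vectorization, layout, and data-movement rewrites, per the modules the abstract describes) and extraction would consult a hardware cost model; the toy rule and hand-picked costs above stand in for both.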