Hardware-Aware Neural Network Compilation with Learned Optimization: A RISC-V Accelerator Approach
By: Ravindra Ganti, Steve Xu
Potential Business Impact:
Makes computer chips run faster and use less power.
We present the XgenSilicon ML Compiler, a fully automated end-to-end compilation framework that transforms high-level machine learning models into optimized RISC-V assembly for custom ASIC accelerators. By unifying the cost model across software and hardware, the compiler delivers significant Power, Performance, and Area (PPA) improvements over both off-the-shelf components and hand-designed chips. Five key innovations enable this: (1) a multi-algorithm auto-tuning framework with five search strategies (Bayesian optimization, genetic algorithm, simulated annealing, random search, and grid search) driven by a learned cost model; (2) an integrated quantization framework supporting precisions from FP32 down to binary, with full KL-divergence calibration (2048-bin histogram optimization) and momentum-based QAT gradient updates; (3) hardware-aware validation that ensures 100 percent ISA compliance and memory-constraint satisfaction; (4) dynamic-shape support with multi-configuration specialization; and (5) cache-aware cost modeling with multi-level cache-hierarchy analysis. In our evaluation, ASICs produced by the compiler achieve 2.5-4.5x better performance, 3-6x lower power consumption, and 40-60 percent smaller area than baseline implementations. The compiler supports more than 100 ONNX operators across 12 categories, implements advanced RISC-V Vector optimizations, and generates hardware-validated assembly suitable for direct ASIC synthesis. All compilation steps are fully automated, requiring no manual intervention from model input to ASIC-ready output.
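The abstract's KL-divergence calibration with a 2048-bin histogram follows a well-known entropy-calibration pattern for INT8 quantization: sweep candidate clipping thresholds and keep the one whose quantized activation distribution diverges least from the FP32 distribution. The paper does not give its implementation, so the sketch below is a hypothetical illustration of that pattern; the function name `kl_calibrate` and its parameters are assumptions, not the compiler's API.

```python
import numpy as np

def kl_calibrate(activations, num_bins=2048, num_quant_levels=128):
    # Hypothetical sketch of entropy (KL-divergence) calibration: choose the
    # clipping threshold that minimizes KL divergence between the FP32
    # activation histogram and its quantized approximation.
    hist, bin_edges = np.histogram(np.abs(activations), bins=num_bins)
    hist = hist.astype(np.float64)
    best_kl, best_threshold = np.inf, bin_edges[-1]
    for i in range(num_quant_levels, num_bins + 1):
        # Reference distribution P: fold everything beyond bin i into the last bin.
        p = hist[:i].copy()
        p[-1] += hist[i:].sum()
        # Quantized distribution Q: merge the i bins into num_quant_levels levels,
        # then expand back, spreading each level's mass over its nonzero bins.
        q = np.zeros(i)
        for idx in np.array_split(np.arange(i), num_quant_levels):
            nonzero = hist[idx] > 0
            if nonzero.any():
                q[idx[nonzero]] = hist[idx].sum() / nonzero.sum()
        # KL divergence D(P || Q) over bins where P has mass.
        mask = p > 0
        p_n = p[mask] / p.sum()
        q_n = np.where(q[mask] > 0, q[mask], 1e-12) / max(q.sum(), 1e-12)
        kl = np.sum(p_n * np.log(p_n / q_n))
        if kl < best_kl:
            best_kl, best_threshold = kl, bin_edges[i]
    return best_threshold
```

For a roughly Gaussian activation tensor, the chosen threshold typically sits below the observed maximum, clipping rare outliers so the 128 INT8 levels are spent on the bulk of the distribution.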
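Among the five search strategies named for auto-tuning, simulated annealing is easy to sketch: propose a neighboring schedule, score it with the cost model, always accept improvements, and accept regressions with a probability that shrinks as a temperature parameter cools. This is a generic illustration of the technique, not the compiler's implementation; the function names, cooling schedule, and parameters are assumptions.

```python
import math
import random

def simulated_annealing(initial, neighbor_fn, cost_fn, steps=500, t0=1.0, t_min=1e-3):
    # Generic simulated-annealing loop over schedule candidates, where
    # cost_fn would be a learned cost model in an auto-tuning setting.
    current, current_cost = initial, cost_fn(initial)
    best, best_cost = current, current_cost
    for step in range(steps):
        t = max(t_min, t0 * (0.99 ** step))  # exponential cooling schedule
        candidate = neighbor_fn(current)
        cand_cost = cost_fn(candidate)
        # Accept improvements always; accept regressions with Boltzmann probability.
        if cand_cost < current_cost or random.random() < math.exp((current_cost - cand_cost) / t):
            current, current_cost = candidate, cand_cost
        if current_cost < best_cost:
            best, best_cost = current, current_cost
    return best, best_cost
```

A toy usage: tuning a single integer knob (say, a tile size) against a synthetic cost function with a minimum at 37, using a +/-1 neighbor move, converges to the optimum within a few hundred steps.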
Similar Papers
Accelerating GenAI Workloads by Enabling RISC-V Microkernel Support in IREE
Hardware Architecture
Makes AI run faster on small computers.
From PyTorch to Calyx: An Open-Source Compiler Toolchain for ML Accelerators
Hardware Architecture
Turns AI code into computer chips.
FPGA-Accelerated RISC-V ISA Extensions for Efficient Neural Network Inference on Edge Devices
Hardware Architecture
Makes smart devices run faster and use less power.