From PyTorch to Calyx: An Open-Source Compiler Toolchain for ML Accelerators
By: Jiahan Xie, Evan Williams, Adrian Sampson
Potential Business Impact:
Turns AI code into computer chips.
We present an end-to-end open-source compiler toolchain that compiles ML models written in PyTorch down to synthesizable SystemVerilog. Our toolchain leverages the accelerator design language Allo, the hardware intermediate representation (IR) Calyx, and the CIRCT project under LLVM. We also implement a set of compiler passes for memory partitioning, enabling effective parallelism in memory-intensive ML workloads. Experimental results demonstrate that our compiler generates optimized, FPGA-implementable hardware designs whose performance is reasonably competitive with closed-source industry-grade tools such as Vitis HLS.
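The memory-partitioning passes mentioned above follow a standard HLS idea: split an array across multiple physical banks so that several elements can be accessed in the same cycle. As a hedged sketch (not the paper's actual pass; the function names and the choice of cyclic partitioning are illustrative assumptions), the index mapping looks like this:

```python
# Illustrative sketch of cyclic memory partitioning, a common HLS
# transform for exposing memory-level parallelism. NOT taken from the
# paper's implementation; names and details are assumptions.

def cyclic_partition(data, factor):
    """Split `data` into `factor` banks; element i goes to bank i % factor."""
    banks = [[] for _ in range(factor)]
    for i, value in enumerate(data):
        banks[i % factor].append(value)
    return banks

def bank_address(i, factor):
    """Map a logical index to (bank, offset) under cyclic partitioning."""
    return i % factor, i // factor

data = list(range(8))
banks = cyclic_partition(data, factor=4)
# banks == [[0, 4], [1, 5], [2, 6], [3, 7]]
# Consecutive indices 0..3 land in distinct banks, so a 4-wide compute
# lane can fetch all four operands in one cycle instead of serializing
# on a single memory port.
```

Cyclic partitioning suits stride-1 access patterns (e.g., dot products); block partitioning, which assigns contiguous chunks to each bank, is the usual alternative for strided access.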
Similar Papers
Hardware-Aware Neural Network Compilation with Learned Optimization: A RISC-V Accelerator Approach
Hardware Architecture
Makes computer chips run faster and use less power.
hls4ml: A Flexible, Open-Source Platform for Deep Learning Acceleration on Reconfigurable Hardware
Hardware Architecture
Makes smart computer programs run super fast.
Hardware.jl - An MLIR-based Julia HLS Flow (Work in Progress)
Software Engineering
Makes computer chips faster for science programs.