An LLVM-Based Optimization Pipeline for SPDZ
By: Tianye Dai, Hammurabi Mendes, Heuichan Lim
Potential Business Impact:
Makes secure computation on secret data much faster and easier to program.
Actively secure arithmetic MPC is now practical for real applications, but performance and usability remain limited by framework-specific compilation stacks, the need for programmers to express parallelism explicitly, and high communication overhead. We design and implement a proof-of-concept LLVM-based optimization pipeline for the SPDZ protocol that addresses these bottlenecks. Our front end accepts a subset of C with lightweight privacy annotations and lowers it to LLVM IR, allowing us to reuse mature analyses and transformations to automatically batch independent arithmetic operations. Our back end performs data-flow and control-flow analysis on the optimized IR to drive a non-blocking runtime scheduler that executes independent operations concurrently and aggressively overlaps communication with computation; when enabled, it can map batched operations to GPU kernels. This design preserves a low learning curve by using a mainstream language and hiding optimization and hardware-specific mechanics from programmers. We evaluate the system on controlled microbenchmarks against MP-SPDZ, focusing on online-phase performance. Our CPU back end achieves up to a 5.56x speedup under intermediate and heavy algebraic workloads and scales well with thread count, while our GPU back end scales better as input size grows. Overall, these results indicate that leveraging LLVM with protocol-aware scheduling is an effective architectural direction for extracting parallelism without sacrificing usability.
Similar Papers
Evaluating Compiler Optimization Impacts on zkVM Performance
Performance
Makes zero-knowledge virtual machine proofs run much faster.
Robust and Verifiable MPC with Applications to Linear Machine Learning Inference
Cryptography and Security
Detects misbehaving parties in secure computation on secret data.
A Configurable Mixed-Precision Fused Dot Product Unit for GPGPU Tensor Computation
Hardware Architecture
Speeds up AI workloads by mixing number precisions in hardware.