FastMPS: Revisit Data Parallel in Large-scale Matrix Product State Sampling
By: Yaojian Chen, Si-Qiu Gong, Lin Gan, and more
Matrix Product State (MPS) is a versatile tensor network representation widely applied in quantum physics, quantum chemistry, and machine learning. MPS sampling is a fundamental operation in these fields. As problems grow more complex, the scale of MPS representations is increasing rapidly. Traditional data parallelism is limited by memory capacity and heavy I/O at large MPS scales, while model parallelism, which can handle large-scale MPS, imposes rigid process bindings and scales poorly. This work proposes FastMPS, a multi-level parallel framework for scalable MPS sampling. Our design combines data parallelism across samples with tensor parallelism along bond dimensions. We eliminate memory and I/O pressure through compression and overlapping, reviving data parallelism in large-scale MPS sampling. We evaluate our approach on Gaussian Boson Sampling, a representative and demanding application. FastMPS achieves over 10x speedup compared to existing simulators, scales to thousands of processes, and enables simulations with 8,176 sites and bond dimension chi = 10^4, significantly outperforming the state of the art. FastMPS demonstrates great potential for high-performance tensor network applications.
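The FastMPS implementation itself is not reproduced here; as a minimal sketch of the per-sample kernel that data parallelism replicates across processes, the following assumes a right-canonical MPS stored as a list of numpy arrays of shape (chi_left, d, chi_right). Each sample is drawn site by site from conditional probabilities, which is why independent samples parallelize trivially (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def sample_mps(tensors, rng):
    """Draw one sample from a right-canonical MPS.

    tensors: list of arrays, each of shape (chi_left, d, chi_right);
    the boundary bond dimensions are 1.
    """
    v = np.ones(1)                 # accumulated left environment
    outcome = []
    for A in tensors:
        d = A.shape[1]
        # Conditional amplitude for each physical outcome s at this site
        w = np.einsum('l,lsr->sr', v, A)        # shape (d, chi_right)
        probs = np.sum(np.abs(w) ** 2, axis=1)  # Born-rule weights
        probs /= probs.sum()
        s = rng.choice(d, p=probs)
        outcome.append(int(s))
        v = w[s] / np.linalg.norm(w[s])         # renormalize environment
    return outcome

# Trivial check: the product state |00...0> as a right-canonical MPS
n = 6
A = np.zeros((1, 2, 1))
A[0, 0, 0] = 1.0
tensors = [A] * n
rng = np.random.default_rng(0)
print(sample_mps(tensors, rng))   # -> [0, 0, 0, 0, 0, 0]
```

In a data-parallel setting, each process holds (or streams) the same tensors and runs this loop for its own batch of samples; the memory and I/O pressure the abstract describes comes from every process needing access to all site tensors, which FastMPS mitigates via compression and communication/computation overlap.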