Batched Training for QLSTM vs. QFWP: A System-Oriented Approach to EPC-Aware RMSE-DA

Published: December 26, 2025 | arXiv ID: 2512.21820v1

By: Jun-Hao Chen, Ming-Kai Hung, Yun-Cheng Tsai, and more

Potential Business Impact:

Enables faster and more accurate forecasting of currency exchange rates.

Business Areas:
A/B Testing, Data and Analytics

We compare two quantum sequence models, QLSTM and QFWP, under an Equal Parameter Count (EPC) and adjoint-differentiation setup on daily EUR/USD forecasting, a controlled one-dimensional time-series case study. Across 10 random seeds and batch sizes from 4 to 64, we measure component-wise runtimes (train forward, backward, full train, and inference) as well as accuracy (RMSE and directional accuracy). Batched forward passes scale well, by about 2.2 to 2.4 times, but backward passes scale modestly (QLSTM about 1.01 to 1.05 times, QFWP about 1.18 to 1.22 times), which caps end-to-end training speedups near 2 times. QFWP achieves lower RMSE and higher directional accuracy at all batch sizes, supported by a Wilcoxon test with p ≤ 0.004 and a large Cliff's delta, while QLSTM reaches the highest throughput at batch size 64, revealing a clear speed-accuracy Pareto frontier. We provide an EPC-aligned, numerically checked benchmarking pipeline and practical guidance on batch-size choices; broader datasets as well as hardware and noise settings are left for future work.
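The accuracy metrics and statistical tests named in the abstract (RMSE, directional accuracy, Wilcoxon test, Cliff's delta) can be computed with standard tools. Below is a minimal sketch of that evaluation step, not the authors' pipeline; the per-seed RMSE arrays are synthetic placeholders standing in for the paper's 10-seed results.

```python
import numpy as np
from scipy.stats import wilcoxon


def rmse(y_true, y_pred):
    """Root-mean-square error of one-step-ahead forecasts."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))


def directional_accuracy(y_true, y_pred):
    """Fraction of steps where the predicted move matches the sign of the realized move."""
    return float(np.mean(np.sign(np.diff(y_true)) == np.sign(np.diff(y_pred))))


def cliffs_delta(a, b):
    """Cliff's delta effect size: P(a > b) - P(a < b) over all cross-pairs."""
    diffs = np.asarray(a)[:, None] - np.asarray(b)[None, :]
    return float((np.sum(diffs > 0) - np.sum(diffs < 0)) / diffs.size)


# Synthetic per-seed RMSEs for the two models (illustrative values only).
rng = np.random.default_rng(0)
rmse_qlstm = rng.normal(0.0050, 0.0002, size=10)
rmse_qfwp = rng.normal(0.0045, 0.0002, size=10)

# Paired comparison across seeds, as in the abstract's QLSTM-vs-QFWP test.
stat, p = wilcoxon(rmse_qlstm, rmse_qfwp)
delta = cliffs_delta(rmse_qlstm, rmse_qfwp)
print(f"Wilcoxon p = {p:.4f}, Cliff's delta = {delta:.2f}")
```

Pairing the two models' results by seed before the Wilcoxon test matches the paper's setup of running both models under identical random seeds.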

Page Count
5 pages

Category
Computer Science:
Computational Engineering, Finance, and Science