Score: 2

Parallel Latent Reasoning for Sequential Recommendation

Published: January 6, 2026 | arXiv ID: 2601.03153v1

By: Jiakai Tang, Xu Chen, Wen Chen, and more

BigTech Affiliations: Alibaba

Potential Business Impact:

Finds what users like by exploring many reasoning paths at once, without slowing down recommendations.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Capturing complex user preferences from sparse behavioral sequences remains a fundamental challenge in sequential recommendation. Recent latent reasoning methods have shown promise by extending test-time computation through multi-step reasoning, yet they exclusively rely on depth-level scaling along a single trajectory, suffering from diminishing returns as reasoning depth increases. To address this limitation, we propose Parallel Latent Reasoning (PLR), a novel framework that pioneers width-level computational scaling by exploring multiple diverse reasoning trajectories simultaneously. PLR constructs parallel reasoning streams through learnable trigger tokens in continuous latent space, preserves diversity across streams via global reasoning regularization, and adaptively synthesizes multi-stream outputs through mixture-of-reasoning-streams aggregation. Extensive experiments on three real-world datasets demonstrate that PLR substantially outperforms state-of-the-art baselines while maintaining real-time inference efficiency. Theoretical analysis further validates the effectiveness of parallel reasoning in improving generalization capability. Our work opens new avenues for enhancing reasoning capacity in sequential recommendation beyond existing depth scaling.
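The abstract describes three components: parallel streams seeded by learnable trigger tokens, a diversity regularizer across streams, and a gated mixture that aggregates the streams. The sketch below illustrates that width-level idea in PyTorch; it is based only on the abstract, so the Transformer-encoder backbone, module names, dimensions, and the specific cosine-similarity penalty and softmax gate are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch of width-level parallel latent reasoning (assumptions noted inline).
import torch
import torch.nn as nn
import torch.nn.functional as F


class ParallelLatentReasoner(nn.Module):
    def __init__(self, d_model=64, n_streams=4, n_heads=4):
        super().__init__()
        # One learnable "trigger token" per reasoning stream (assumed form).
        self.triggers = nn.Parameter(torch.randn(n_streams, d_model) * 0.02)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Gate weighting the streams; a stand-in for the paper's
        # mixture-of-reasoning-streams aggregation.
        self.gate = nn.Linear(d_model, 1)

    def forward(self, seq_emb):
        # seq_emb: (batch, seq_len, d_model) embeddings of a user's behavior sequence.
        B = seq_emb.size(0)
        streams = []
        for k in range(self.triggers.size(0)):
            trig = self.triggers[k].expand(B, 1, -1)
            h = self.encoder(torch.cat([seq_emb, trig], dim=1))
            streams.append(h[:, -1])            # latent state at the trigger position
        S = torch.stack(streams, dim=1)         # (batch, n_streams, d_model)

        # Diversity term: penalize pairwise cosine similarity between streams,
        # a simple proxy for the paper's "global reasoning regularization".
        Sn = F.normalize(S, dim=-1)
        sim = Sn @ Sn.transpose(1, 2)
        off_diag = sim - torch.diag_embed(torch.diagonal(sim, dim1=1, dim2=2))
        div_loss = off_diag.pow(2).mean()

        # Adaptive aggregation: softmax gate over streams.
        w = torch.softmax(self.gate(S), dim=1)  # (batch, n_streams, 1)
        user_repr = (w * S).sum(dim=1)          # (batch, d_model)
        return user_repr, div_loss


if __name__ == "__main__":
    model = ParallelLatentReasoner()
    fake_seq = torch.randn(8, 20, 64)           # 8 users, 20 interactions each
    repr_, div = model(fake_seq)
    print(repr_.shape, div.item())              # torch.Size([8, 64]) and a scalar
```

In this reading, the extra computation grows with the number of streams (width) rather than the number of sequential reasoning steps (depth), and the streams can be evaluated in parallel, which is consistent with the claim of real-time inference efficiency.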

Country of Origin
🇨🇳 China

Page Count
12 pages

Category
Computer Science:
Information Retrieval