Score: 1

OnePiece: The Great Route to Generative Recommendation -- A Case Study from Tencent Algorithm Competition

Published: December 8, 2025 | arXiv ID: 2512.07424v1

By: Jiangxia Cao, Shuo Yang, Zijun Wang, and more

Potential Business Impact:

Makes computer recommendations learn and improve faster.

Business Areas:
Personalization; Commerce and Shopping

In recent years, OpenAI's Scaling Laws have demonstrated the remarkable capability of the next-token-prediction paradigm in neural language modeling, pointing to a free-lunch way to enhance model performance by scaling up model parameters. In RecSys, the retrieval stage also follows a 'next-token prediction' paradigm, recalling hundreds of items from the global item set; thus generative recommendation usually refers specifically to the retrieval stage (excluding tree-based methods). This raises a philosophical question: without a ground-truth next item, does generative recommendation also hold a potential scaling law? In retrospect, generative recommendation has two different technique paradigms: (1) the ANN-based framework, which utilizes a compressed user embedding to retrieve the nearest items in embedding space, e.g., KuaiFormer; (2) the auto-regressive framework, which employs beam search to decode items from the whole item space, e.g., OneRec. In this paper, we devise a unified encoder-decoder framework to validate their scaling laws at the same time. Our empirical finding is that both of their losses strictly adhere to power-law Scaling Laws ($R^2 > 0.9$) within our unified architecture.
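The abstract's central claim is that loss follows a power law in model scale with $R^2 > 0.9$. A minimal sketch of how such a power-law fit and its $R^2$ might be computed, via linear regression in log-log space (the model sizes and coefficients below are illustrative placeholders, not values from the paper):

```python
import numpy as np

def fit_power_law(sizes, losses):
    """Fit loss ~ a * size^b by least squares in log-log space.

    Returns (a, b, r2), where r2 is the coefficient of
    determination of the log-log linear fit.
    """
    x = np.log(np.asarray(sizes, dtype=float))
    y = np.log(np.asarray(losses, dtype=float))
    b, log_a = np.polyfit(x, y, 1)  # slope and intercept in log space
    y_hat = log_a + b * x
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return float(np.exp(log_a)), float(b), float(r2)

# Illustrative data generated from loss = 4.2 * N^(-0.08):
sizes = np.array([1e6, 1e7, 1e8, 1e9])
losses = 4.2 * sizes ** -0.08
a, b, r2 = fit_power_law(sizes, losses)
```

A fit like this is how one would check the paper's $R^2 > 0.9$ criterion: data that truly follows a power law yields $R^2$ close to 1 in log-log space.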

Repos / Data Links

Page Count
5 pages

Category
Computer Science:
Information Retrieval