RecBase: Generative Foundation Model Pretraining for Zero-Shot Recommendation
By: Sashuai Zhou, Weinan Gan, Qijiong Liu, and more
Potential Business Impact:
Recommends items more accurately across different apps.
Recent advances in LLM-based recommendation have shown promise, yet their cross-domain generalization is hindered by a fundamental mismatch between language-centric pretraining and the recommendation task. Existing methods, relying on language-level knowledge, fail to capture dynamic, item-level user interests across domains. To bridge this gap, we propose RecBase, a domain-agnostic foundation model pretrained with a recommendation-oriented objective. RecBase leverages a large-scale, heterogeneous, cross-domain corpus with unified textual representations and feature mappings to enhance cross-domain generalization. To further align item semantics across domains, we introduce a unified item tokenizer that encodes items into hierarchical concept identifiers, enabling structured representation and efficient vocabulary sharing. The model is trained using an autoregressive objective to capture complex item-level sequential patterns. On eight real-world datasets, our 1.5B-parameter model matches or surpasses the performance of LLM baselines up to 7B parameters in zero-shot and cross-domain recommendation tasks.
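The abstract describes two mechanisms: a tokenizer that maps each item to a short sequence of hierarchical (coarse-to-fine) concept IDs, and an autoregressive next-token objective over the resulting item-token sequences. The sketch below is not the authors' implementation; it assumes a residual-quantization-style tokenizer and substitutes a tiny recurrent decoder for the 1.5B backbone. All names and sizes here (HierarchicalItemTokenizer, TinyDecoder, num_levels, codebook_size) are illustrative assumptions.

```python
# Minimal sketch (assumed, not the authors' code): hierarchical concept IDs
# via residual quantization, plus an autoregressive next-token objective
# over the flattened item-token sequences.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalItemTokenizer(nn.Module):
    """Maps an item embedding to num_levels discrete concept IDs, coarse to fine."""
    def __init__(self, dim=64, num_levels=3, codebook_size=256):
        super().__init__()
        self.codebooks = nn.ParameterList(
            nn.Parameter(torch.randn(codebook_size, dim)) for _ in range(num_levels)
        )

    @torch.no_grad()
    def encode(self, item_emb):  # item_emb: (batch, dim)
        ids, residual = [], item_emb
        for cb in self.codebooks:
            # Pick the nearest codeword at this level; the residual carries
            # the remaining detail down to the next (finer) level.
            dists = torch.cdist(residual, cb)   # (batch, codebook_size)
            idx = dists.argmin(dim=-1)          # (batch,)
            ids.append(idx)
            residual = residual - cb[idx]
        return torch.stack(ids, dim=-1)         # (batch, num_levels)

class TinyDecoder(nn.Module):
    """Stand-in for the autoregressive backbone (a causal model over concept IDs)."""
    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.gru = nn.GRU(dim, dim, batch_first=True)  # causal by construction
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens):                  # tokens: (batch, seq_len)
        h, _ = self.gru(self.emb(tokens))
        return self.head(h)                     # (batch, seq_len, vocab_size)

def ar_loss(model, token_seq):
    """Next-token prediction over a user's flattened item-token history."""
    logits = model(token_seq[:, :-1])
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)), token_seq[:, 1:].reshape(-1)
    )

# Usage: 4 users with 5-item histories; each item becomes 3 concept tokens.
tok = HierarchicalItemTokenizer()
model = TinyDecoder(vocab_size=256)
items = torch.randn(4, 5, 64)
seq = tok.encode(items.reshape(-1, 64)).reshape(4, -1)  # (4, 15) token IDs
loss = ar_loss(model, seq)
loss.backward()
```

In practice each quantization level would likely index its own offset into one shared vocabulary, which is one plausible reading of the "efficient vocabulary sharing" the abstract mentions.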
Similar Papers
RecGPT: A Foundation Model for Sequential Recommendation
Information Retrieval
Recommends new things without learning them first.
Large Language Model Empowered Recommendation Meets All-domain Continual Pre-Training
Information Retrieval
Helps computers suggest things you'll like.
Evaluating Recabilities of Foundation Models: A Multi-Domain, Multi-Dataset Benchmark
Information Retrieval
Tests how well AI models recommend things.