Score: 2

LIME: Link-based user-item Interaction Modeling with decoupled xor attention for Efficient test time scaling

Published: October 21, 2025 | arXiv ID: 2510.18239v2

By: Yunjiang Jiang, Ayush Agarwal, Yang Liu, and more

BigTech Affiliations: Meta

Potential Business Impact:

Delivers recommendations faster, even when scoring very large sets of candidate items.

Business Areas:
Semantic Search, Internet Services

Scaling large recommendation systems requires advancing three major frontiers: processing longer user histories, expanding candidate sets, and increasing model capacity. While promising, transformers' computational cost scales quadratically with the user sequence length and linearly with the number of candidates. This trade-off makes it prohibitively expensive to expand candidate sets or increase sequence length at inference, despite the significant performance improvements. We introduce LIME, a novel architecture that resolves this trade-off. Through two key innovations, LIME fundamentally reduces computational complexity. First, low-rank "link embeddings" enable pre-computation of attention weights by decoupling user and candidate interactions, making the inference cost nearly independent of candidate set size. Second, a linear attention mechanism, LIME-XOR, reduces the complexity with respect to user sequence length from quadratic (O(N²)) to linear (O(N)). Experiments on public and industrial datasets show LIME achieves near-parity with state-of-the-art transformers but with a 10× inference speedup on large candidate sets or long sequence lengths. When tested on a major recommendation platform, LIME improved user engagement while maintaining minimal inference costs with respect to candidate set size and user history length, establishing a new paradigm for efficient and expressive recommendation systems.
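To make the cost structure concrete, here is a minimal sketch of the two ideas the abstract describes. All names, shapes, and the feature map are assumptions for illustration, not the paper's exact formulation: a small set of r "link embeddings" summarizes the user history once via a linear-attention pass (O(N) in history length), and each candidate then interacts only with those r summaries, so per-candidate scoring is independent of N and the user-side work is shared across the whole candidate set.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, r, C = 1000, 64, 8, 5000  # history length, embed dim, link count, candidates

history = rng.normal(size=(N, d))     # user interaction sequence (keys/values)
links = rng.normal(size=(r, d))       # hypothetical learned link embeddings (queries)
candidates = rng.normal(size=(C, d))  # candidate item embeddings

def phi(x):
    # positive feature map (elu + 1), a common linear-attention choice
    return np.where(x > 0, x + 1.0, np.exp(x))

# --- user-side pre-computation (once per user, linear in N) ---
K = V = history
kv_summary = phi(K).T @ V                 # (d, d): O(N d^2), no N x N matrix
norm = phi(K).sum(axis=0, keepdims=True)  # (1, d): running normalizer
link_states = (phi(links) @ kv_summary) / (phi(links) @ norm.T)  # (r, d)

# --- candidate-side scoring (independent of both N and the other candidates) ---
# each candidate softmax-attends only to the r precomputed link states
attn = candidates @ link_states.T                          # (C, r)
weights = np.exp(attn - attn.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)
scores = np.einsum('cr,rd,cd->c', weights, link_states, candidates)  # (C,)
```

With this split, growing the candidate set C only adds cheap (C, r) work, and growing the history N only adds to the one-time O(N) summary, which mirrors the trade-off the abstract says LIME resolves.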

Country of Origin
🇺🇸 United States

Page Count
16 pages

Category
Computer Science:
Information Retrieval