Score: 2

Cross-attention Secretly Performs Orthogonal Alignment in Recommendation Models

Published: October 10, 2025 | arXiv ID: 2510.09435v1

By: Hyunin Lee, Yong Zhang, Hoang Vu Nguyen, and more

BigTech Affiliations: Meta; University of California, Berkeley

Potential Business Impact:

Suggests that adding cross-attention modules lets recommendation models discover novel cross-domain information, improving accuracy per model parameter.

Business Areas:
Semantic Search, Internet Services

Cross-domain sequential recommendation (CDSR) aims to align heterogeneous user-behavior sequences collected from different domains. While cross-attention is widely used to enhance alignment and improve recommendation performance, its underlying mechanism is not fully understood. Most researchers interpret cross-attention as residual alignment: the output is generated by removing redundant information from the query input and preserving the non-redundant remainder, with data from the other domain serving as the key and value. Beyond this prevailing view, we introduce Orthogonal Alignment, a phenomenon in which cross-attention discovers novel information that is not present in the query input, and we further argue that these two contrasting alignment mechanisms can co-exist in recommendation models. Across more than 300 experiments, we find that model performance improves when the query input and the output of cross-attention are orthogonal. Notably, Orthogonal Alignment emerges naturally, without any explicit orthogonality constraint. Our key insight is that Orthogonal Alignment emerges because it improves scaling: baselines that additionally incorporate a cross-attention module outperform parameter-matched baselines, achieving superior accuracy per model parameter. We hope these findings offer new directions for parameter-efficient scaling in multi-modal research.
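A minimal PyTorch sketch of the orthogonality diagnostic the abstract describes: a cross-attention layer takes a domain-A behavior sequence as the query and a domain-B sequence as key and value, then the cosine similarity between the query input and the attention output is measured. Near-zero values would indicate the Orthogonal Alignment regime. All shapes, the random inputs, and the untrained attention layer are illustrative assumptions, not the paper's actual architecture or metric.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical dimensions for illustration only.
batch, seq_a, seq_b, dim = 4, 10, 12, 64

# Domain-A user-behavior embeddings serve as the query;
# domain-B embeddings serve as key and value.
query = torch.randn(batch, seq_a, dim)
key_value = torch.randn(batch, seq_b, dim)

cross_attn = torch.nn.MultiheadAttention(
    embed_dim=dim, num_heads=4, batch_first=True
)
output, _ = cross_attn(query, key_value, key_value)

# Orthogonality diagnostic: cosine similarity between the query input
# and the cross-attention output, per position. Values near 0 suggest
# the output carries information absent from the query (Orthogonal
# Alignment); values near 1 suggest residual-style alignment.
cos = F.cosine_similarity(query, output, dim=-1)  # shape: (batch, seq_a)
print(f"mean |cosine(query, output)| = {cos.abs().mean().item():.4f}")
```

In a trained CDSR model, this statistic would be tracked on real query/output activations rather than random tensors; the paper's claim is that it drifts toward orthogonality during training without any explicit constraint.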

Country of Origin
🇺🇸 United States


Page Count
19 pages

Category
Computer Science:
Machine Learning (CS)