GraphTARIF: Linear Graph Transformer with Augmented Rank and Improved Focus
By: Zhaolin Hu, Kun Li, Hehe Fan, and more
Potential Business Impact:
Makes AI that works on networks of connected data faster and more accurate at classification.
Linear attention mechanisms have emerged as efficient alternatives to full self-attention in Graph Transformers, offering linear time complexity. However, existing linear attention models often suffer from a significant drop in expressiveness due to low-rank projection structures and overly uniform attention distributions. We theoretically prove that these properties reduce the class separability of node representations, limiting the model's classification ability. To address this, we propose a novel hybrid framework that enhances both the rank and focus of attention. Specifically, we enhance linear attention by attaching a gated local graph network branch to the value matrix, thereby increasing the rank of the resulting attention map. Furthermore, to alleviate the excessive smoothing effect inherent in linear attention, we introduce a learnable log-power function into the attention scores to sharpen focus. We theoretically show that this function decreases the entropy of the attention distribution, enhancing the separability of learned embeddings. Extensive experiments on both homophilic and heterophilic graph benchmarks demonstrate that our method achieves competitive performance while preserving the scalability of linear attention.
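To make the two ideas in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of a single layer: a linear-attention block whose value path is augmented by a gated local graph branch, and a learnable log-power transform that sharpens the (kernelized) attention scores. The class name, the elu+1 feature map, the sigmoid gate, the softplus-parameterized exponent, and the dense adjacency input are all illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of the two components described in the abstract.
# All names and exact formulas are assumptions made for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SketchGraphTARIFLayer(nn.Module):
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        # Gate controlling how much of the local graph branch is mixed into the values.
        self.gate = nn.Linear(dim, dim)
        # Learnable exponent p >= 1 for the log-power sharpening (p = 1 + softplus(alpha)).
        self.alpha = nn.Parameter(torch.zeros(1))
        self.eps = eps

    def feature_map(self, x: torch.Tensor) -> torch.Tensor:
        # Positive kernel feature map commonly used in linear-attention variants.
        return F.elu(x) + 1.0

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (n, dim) node features; adj: (n, n) dense, normalized adjacency.
        q = self.feature_map(self.q_proj(x))
        k = self.feature_map(self.k_proj(x))
        v = self.v_proj(x)

        # Gated local branch on the value path: v' = v + sigmoid(gate(x)) * (A @ v).
        # Mixing a graph-propagated term into V is one way to raise the rank of the
        # effective attention map, in the spirit of the abstract.
        local = adj @ v
        v = v + torch.sigmoid(self.gate(x)) * local

        # Log-power sharpening: raise the positive kernel scores to a learnable
        # power p >= 1, i.e. exp(p * log(score)), which lowers the entropy of the
        # implied attention distribution and sharpens its focus.
        p = 1.0 + F.softplus(self.alpha)
        q = torch.exp(p * torch.log(q + self.eps))
        k = torch.exp(p * torch.log(k + self.eps))

        # Standard linear-attention aggregation in O(n * dim^2):
        # out_i = q_i (K^T V) / (q_i K^T 1).
        kv = k.t() @ v                                    # (dim, dim)
        normalizer = q @ k.sum(dim=0, keepdim=True).t()   # (n, 1)
        return (q @ kv) / (normalizer + self.eps)


# Minimal usage example with random features and a self-loop adjacency.
if __name__ == "__main__":
    n, dim = 8, 16
    layer = SketchGraphTARIFLayer(dim)
    x = torch.randn(n, dim)
    adj = torch.eye(n)
    print(layer(x, adj).shape)  # torch.Size([8, 16])
```

The kernelized formulation keeps the per-layer cost linear in the number of nodes, since the attention matrix is never materialized; only the small (dim × dim) key-value summary is formed.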
Similar Papers
Learning Advanced Self-Attention for Linear Transformers in the Singular Value Domain
Machine Learning (CS)
Helps computers understand complex patterns better.
Unifying and Enhancing Graph Transformers via a Hierarchical Mask Framework
CV and Pattern Recognition
Helps computers understand complex connections better.
Attention Beyond Neighborhoods: Reviving Transformer for Graph Clustering
Machine Learning (CS)
Helps computers group similar things by looking at connections.