Learning from Historical Activations in Graph Neural Networks

Published: January 3, 2026 | arXiv ID: 2601.01123v1

By: Yaniv Galron, Hadar Sinai, Haggai Maron, and more

Potential Business Impact:

**Uses node history to improve graph predictions.**

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Graph Neural Networks (GNNs) have demonstrated remarkable success in various domains such as social networks and molecular chemistry. A crucial component of GNNs is the pooling procedure, in which the node features computed by the model are combined into an informative final descriptor for the downstream task. However, previous graph pooling schemes rely on the last GNN layer's features as input to the pooling or classifier layers, potentially under-utilizing important activations produced by earlier layers during the forward pass, which we regard as historical graph activations. This gap is particularly pronounced when a node's representation shifts significantly over the course of many graph neural layers, and it is worsened by graph-specific challenges such as over-smoothing in deep architectures. To bridge this gap, we introduce HISTOGRAPH, a novel two-stage attention-based final aggregation layer that first applies a unified layer-wise attention over intermediate activations, followed by node-wise attention. By modeling the evolution of node representations across layers, HISTOGRAPH leverages both the activation history of nodes and the graph structure to refine the features used for final prediction. Empirical results on multiple graph classification benchmarks demonstrate that HISTOGRAPH consistently improves on traditional pooling techniques, with particularly strong robustness in deep GNNs.
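To make the two-stage readout concrete, here is a minimal sketch in PyTorch. It assumes the same feature dimension at every layer and uses simple linear scoring functions; the class and parameter names (`TwoStageHistoryPooling`, `layer_query`, `node_query`) are illustrative assumptions, not the paper's implementation, which may use a different attention parameterization.

```python
import torch
import torch.nn as nn

class TwoStageHistoryPooling(nn.Module):
    """Sketch of a two-stage attention readout in the spirit of HISTOGRAPH:
    (1) layer-wise attention fuses each node's activation history across
    GNN layers, (2) node-wise attention pools the fused node features into
    a single graph descriptor. Scoring functions here are assumptions."""

    def __init__(self, dim):
        super().__init__()
        self.layer_query = nn.Linear(dim, 1)  # scores each layer's activation per node
        self.node_query = nn.Linear(dim, 1)   # scores each fused node feature

    def forward(self, layer_activations):
        # layer_activations: list of [num_nodes, dim] tensors, one per GNN layer
        h = torch.stack(layer_activations, dim=1)   # [num_nodes, num_layers, dim]

        # Stage 1: attention over each node's activation history across layers.
        layer_weights = torch.softmax(self.layer_query(h), dim=1)
        fused = (layer_weights * h).sum(dim=1)      # [num_nodes, dim]

        # Stage 2: attention over nodes to form the graph-level descriptor.
        node_weights = torch.softmax(self.node_query(fused), dim=0)
        return (node_weights * fused).sum(dim=0)    # [dim]

pool = TwoStageHistoryPooling(dim=64)
history = [torch.randn(10, 64) for _ in range(4)]  # 4 GNN layers, 10 nodes
graph_descriptor = pool(history)                   # shape: [64]
```

Because the layer attention is computed per node, each node can weight its own history differently, which is what lets the readout recover early-layer signal that over-smoothing would otherwise wash out of the final layer.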

Country of Origin
🇮🇱 Israel

Page Count
21 pages

Category
Computer Science:
Machine Learning (CS)