Unfolded Laplacian Spectral Embedding: A Theoretically Grounded Approach to Dynamic Network Representation
By: Haruka Ezoe, Hiroki Matsumoto, Ryohei Hisano
Potential Business Impact:
Helps AI understand changing connections better.
Dynamic relational structures play a central role in many AI tasks, but their evolving nature presents challenges for consistent and interpretable representation. A common approach is to learn time-varying node embeddings, whose effectiveness depends on satisfying key stability properties. In this paper, we propose Unfolded Laplacian Spectral Embedding, a new method that extends the Unfolded Adjacency Spectral Embedding framework to normalized Laplacians while preserving both cross-sectional and longitudinal stability. We provide a formal proof that our method satisfies these stability conditions. In addition, as a further benefit of using the Laplacian matrix, we establish a new Cheeger-style inequality that connects the embeddings to the conductance of the underlying dynamic graphs. Empirical evaluations on synthetic and real-world datasets support our theoretical findings and demonstrate the strong performance of our method. These results establish a principled and stable framework for dynamic network representation grounded in spectral graph theory.
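To make the unfolded-embedding idea concrete, here is a minimal NumPy sketch of the general recipe the abstract builds on: stack the adjacency snapshots of a dynamic graph side by side, apply a degree-based (Laplacian-style) normalization, and take a truncated SVD to obtain a shared anchor embedding plus one embedding per node per time step. The snapshot data, dimensions, and normalization details here are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, d = 30, 4, 2  # nodes, time steps, embedding dimension (illustrative)

# Random symmetric snapshots standing in for a dynamic graph (placeholder data)
snapshots = []
for _ in range(T):
    A = np.triu((rng.random((n, n)) < 0.2).astype(float), 1)
    snapshots.append(A + A.T)

# Unfolded matrix: snapshots stacked column-wise, shape (n, n*T)
A_unf = np.hstack(snapshots)

# Degree-normalize rows and columns (a Laplacian-style normalization; sketch only)
r = A_unf.sum(axis=1)
c = A_unf.sum(axis=0)
L = (1.0 / np.sqrt(np.maximum(r, 1e-12)))[:, None] * A_unf \
    * (1.0 / np.sqrt(np.maximum(c, 1e-12)))[None, :]

# Truncated SVD: left singular vectors give a shared "anchor" embedding,
# right singular vectors give one embedding per node per time step
U, s, Vt = np.linalg.svd(L, full_matrices=False)
anchor = U[:, :d] * np.sqrt(s[:d])        # shape (n, d)
dynamic = Vt[:d].T * np.sqrt(s[:d])       # shape (n*T, d)
dynamic = dynamic.reshape(T, n, d)        # embedding of node i at time t

print(anchor.shape, dynamic.shape)
```

Because every snapshot is embedded against the same anchor, embeddings of the same node at different times live in a common space, which is what makes the cross-sectional and longitudinal stability properties discussed in the paper meaningful to state.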
Similar Papers
Resolving Node Identifiability in Graph Neural Processes via Laplacian Spectral Encodings
Machine Learning (CS)
Helps computers understand complex connections better.
Towards Robust DeepFake Detection under Unstable Face Sequences: Adaptive Sparse Graph Embedding with Order-Free Representation and Explicit Laplacian Spectral Prior
CV and Pattern Recognition
Finds fake videos even when they are messy.
Spectral Neural Graph Sparsification
Machine Learning (CS)
Makes computer models of networks faster and better.