Tensor State Space-based Dynamic Multilayer Network Modeling
By: Tian Lan, Jie Guo, Chen Zhang
Potential Business Impact:
Helps track how relationships in complex, multi-layered networks change over time.
Understanding the complex interactions within dynamic multilayer networks is critical for advancements in various scientific domains. Existing models often fail to capture such networks' temporal and cross-layer dynamics. This paper introduces a novel Tensor State Space Model for Dynamic Multilayer Networks (TSSDMN), built on a latent space model framework. TSSDMN employs a symmetric Tucker decomposition to represent latent node features, their interaction patterns, and layer transitions. Then, by fixing the latent features and allowing the interaction patterns to evolve over time, TSSDMN uniquely captures temporal dynamics both within layers and across different layers. Identifiability conditions for the model are discussed. By treating the latent features as variables whose posterior distributions are approximated via mean-field variational inference, a variational Expectation-Maximization algorithm is developed for efficient model inference. Numerical simulations and case studies demonstrate the efficacy of TSSDMN for understanding dynamic multilayer networks.
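To make the abstract's core idea concrete, here is a minimal NumPy sketch of a symmetric Tucker-style reconstruction in which node features are fixed while layer-specific interaction patterns evolve through a simple linear state transition. All names, dimensions, and the transition dynamics (`U`, `G`, `Phi`, the noise scale) are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, L, T = 30, 4, 2, 5  # nodes, latent dimension, layers, time steps (all hypothetical)

# Shared latent node features, fixed over time (symmetric Tucker factor)
U = rng.normal(size=(n, r))

# Layer-specific interaction cores that evolve via a simple linear,
# state-space-style transition -- an assumed stand-in for the model's dynamics
Phi = 0.9 * np.eye(r)  # hypothetical transition matrix
G = np.zeros((T, L, r, r))
G[0] = rng.normal(size=(L, r, r))
for t in range(1, T):
    for layer in range(L):
        G[t, layer] = Phi @ G[t - 1, layer] @ Phi.T + 0.1 * rng.normal(size=(r, r))

def expected_adjacency(t, layer):
    """Symmetric Tucker reconstruction of layer `layer` at time `t`: U S U^T."""
    S = 0.5 * (G[t, layer] + G[t, layer].T)  # symmetrize the core
    return U @ S @ U.T

A = expected_adjacency(2, 1)
print(A.shape)  # (30, 30), symmetric by construction
```

Because `U` is shared across every layer and time step, all temporal and cross-layer variation is carried by the evolving cores `G[t, layer]`, which mirrors the abstract's separation of fixed latent features from time-varying interaction patterns.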
Similar Papers
From Layers to States: A State Space Model Perspective to Deep Neural Network Layer Dynamics
Machine Learning (CS)
Makes computer vision models learn better from more data.
Deep Learning-based Approaches for State Space Models: A Selective Review
Machine Learning (Stat)
Helps computers understand changing information better.
Structured State Space Model Dynamics and Parametrization for Spiking Neural Networks
Neural and Evolutionary Computing
Makes brain-like computers learn faster and better.