Sparse Probabilistic Graph Circuits
By: Martin Rektoris, Milan Papež, Václav Šmídl, and more
Potential Business Impact:
Speeds up computer-aided drug design by efficiently modeling how atoms in molecules connect.
Deep generative models (DGMs) for graphs achieve impressively high expressive power thanks to efficient and scalable neural networks. However, these networks contain non-linearities that prevent the analytical computation of many standard probabilistic inference queries, i.e., these DGMs are considered intractable. While recently proposed Probabilistic Graph Circuits (PGCs) address this issue by enabling tractable probabilistic inference, they operate on dense graph representations with $\mathcal{O}(n^2)$ complexity for graphs with $n$ nodes and $m$ edges. To address this scalability issue, we introduce Sparse PGCs (SPGCs), a new class of tractable generative models that operate directly on sparse graph representations, reducing the complexity to $\mathcal{O}(n + m)$, which is particularly beneficial when $m \ll n^2$. In the context of de novo drug design, we empirically demonstrate that SPGCs retain exact inference capabilities, improve memory efficiency and inference speed, and match the performance of intractable DGMs on key metrics.
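The $\mathcal{O}(n^2)$ versus $\mathcal{O}(n + m)$ gap comes down to how the graph is stored. Below is a minimal Python sketch (not the paper's implementation; the function names and the atom-type feature are illustrative assumptions) contrasting a dense adjacency matrix with a node-list-plus-edge-list format of the kind sparse models operate on. For a chain-shaped molecule with 1,000 atoms, the dense matrix allocates a million cells while the sparse form stores roughly two thousand entries.

import numpy as np

def dense_representation(n, edges):
    # Dense adjacency matrix: O(n^2) memory regardless of edge count.
    A = np.zeros((n, n), dtype=np.int8)
    for u, v in edges:
        A[u, v] = 1
        A[v, u] = 1  # undirected edge, e.g., a molecular bond
    return A

def sparse_representation(node_features, edges):
    # Node list plus edge list: O(n + m) memory.
    return {
        "nodes": list(node_features),  # n entries, e.g., atom types
        "edges": list(edges),          # m entries, e.g., bonds
    }

# A chain molecule with n atoms has only m = n - 1 bonds (m << n^2),
# so the dense matrix wastes almost all of its n^2 cells.
n = 1000
edges = [(i, i + 1) for i in range(n - 1)]
atom_types = ["C"] * n  # hypothetical feature: every atom is carbon

dense = dense_representation(n, edges)
sparse = sparse_representation(atom_types, edges)
print(dense.size)                                    # 1000000 cells stored
print(len(sparse["nodes"]) + len(sparse["edges"]))   # 1999 entries stored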
Similar Papers
Probabilistic Graph Circuits: Deep Generative Models for Tractable Probabilistic Inference over Graphs
Machine Learning (CS)
Lets computers answer probability questions about graphs exactly.
Simplicial Gaussian Models: Representation and Inference
Machine Learning (Stat)
Models complex connections in data, not just pairs.
How do Probabilistic Graphical Models and Graph Neural Networks Look at Network Data?
Machine Learning (Stat)
Compares two ways computers learn from network connections.