How to Marginalize in Causal Structure Learning?
By: William Zhao, Guy Van den Broeck, Benjie Wang
Potential Business Impact:
Finds hidden patterns in data faster.
Bayesian networks (BNs) are a widely used class of probabilistic graphical models with applications across numerous domains. However, inferring the network's graphical structure from data remains challenging. Bayesian structure learners approach this problem by inferring a posterior distribution over the possible directed acyclic graphs underlying the BN. The inference process often requires marginalizing over probability distributions, which is typically done using dynamic programming methods that restrict the set of possible parents for each node. Instead, we present a novel method that uses tractable probabilistic circuits to circumvent this restriction. The method relies on a new learning routine that trains these circuits on both the original distribution and marginal queries. The architecture of probabilistic circuits then inherently allows fast and exact marginalization over the learned distribution. We show empirically that using our method to answer marginal queries improves the performance of Bayesian structure learners over current methods.
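The key property the abstract relies on is that a smooth, decomposable probabilistic circuit answers marginal queries exactly in a single bottom-up pass: indicator leaves for marginalized variables simply evaluate to 1. A minimal sketch of this idea (all class names and the toy two-variable circuit are illustrative, not the paper's actual learning routine):

```python
# Minimal sketch: exact marginalization in a toy probabilistic circuit
# built from sum, product, and indicator-leaf nodes. Illustrative only.

class Leaf:
    """Indicator leaf for variable `var` taking value `val`."""
    def __init__(self, var, val):
        self.var, self.val = var, val

    def eval(self, evidence):
        # A marginalized variable is simply absent from `evidence`;
        # its indicator leaves then evaluate to 1, which is exactly
        # how PCs sum a variable out in one pass.
        if self.var not in evidence:
            return 1.0
        return 1.0 if evidence[self.var] == self.val else 0.0

class Product:
    """Product node: children must have disjoint scopes (decomposability)."""
    def __init__(self, children):
        self.children = children

    def eval(self, evidence):
        out = 1.0
        for c in self.children:
            out *= c.eval(evidence)
        return out

class Sum:
    """Sum node: weighted mixture of children with identical scope (smoothness)."""
    def __init__(self, weighted_children):
        self.weighted_children = weighted_children  # list of (weight, node)

    def eval(self, evidence):
        return sum(w * c.eval(evidence) for w, c in self.weighted_children)

def bernoulli(var, p):
    """Distribution over one binary variable as a weighted sum of indicators."""
    return Sum([(1 - p, Leaf(var, 0)), (p, Leaf(var, 1))])

# Toy circuit over binary X and Y:
#   p(X, Y) = 0.6 * p1(X) p1(Y) + 0.4 * p2(X) p2(Y)
root = Sum([
    (0.6, Product([bernoulli("X", 0.9), bernoulli("Y", 0.2)])),
    (0.4, Product([bernoulli("X", 0.3), bernoulli("Y", 0.7)])),
])

p_joint = root.eval({"X": 1, "Y": 0})  # p(X=1, Y=0)
p_marg = root.eval({"X": 1})           # p(X=1), with Y summed out exactly
```

Both queries cost one linear pass over the circuit; no enumeration over the marginalized variable's values is needed, which is the tractability the paper exploits.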
Similar Papers
Scalable Bayesian Network Structure Learning Using Tsetlin Machine to Constrain the Search Space
Machine Learning (CS)
Finds causes faster for big problems.
A PC Algorithm for Max-Linear Bayesian Networks
Machine Learning (Stat)
Finds hidden connections in data with weird patterns.
RLBayes: a Bayesian Network Structure Learning Algorithm via Reinforcement Learning-Based Search Strategy
Machine Learning (CS)
Teaches computers to find best cause-and-effect maps.