How to Marginalize in Causal Structure Learning?

Published: November 18, 2025 | arXiv ID: 2511.14001v1

By: William Zhao, Guy Van den Broeck, Benjie Wang

Potential Business Impact:

Speeds up discovery of causal structure hidden in data.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Bayesian networks (BNs) are a widely used class of probabilistic graphical models employed across numerous application domains. However, inferring a network's graphical structure from data remains challenging. Bayesian structure learners approach this problem by inferring a posterior distribution over the possible directed acyclic graphs underlying the BN. The inference process often requires marginalizing over probability distributions, which is typically done with dynamic programming methods that restrict the set of possible parents for each node. Instead, we present a novel method that uses tractable probabilistic circuits to circumvent this restriction. The method relies on a new learning routine that trains these circuits on both the original distribution and marginal queries. The architecture of probabilistic circuits then inherently allows fast and exact marginalization on the learned distribution. We show empirically that using our method to answer marginal queries allows Bayesian structure learners to improve their performance over current methods.
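To illustrate the property the abstract relies on, here is a minimal sketch (not the paper's implementation) of exact marginalization in a tiny smooth, decomposable probabilistic circuit: replacing a marginalized variable's leaves with 1 and evaluating the circuit bottom-up yields the exact marginal in a single pass. The structure, leaf names, and parameters below are illustrative assumptions.

```python
# Minimal probabilistic-circuit sketch over two binary variables X0 and X1.
# Nodes are closures: each maps a partial assignment to a value.

def bernoulli_leaf(var, p):
    # A marginalized variable (assignment value None) makes the leaf
    # evaluate to 1 -- the key property enabling exact marginalization.
    def f(assign):
        x = assign.get(var)
        if x is None:
            return 1.0
        return p if x == 1 else 1.0 - p
    return f

def product(*children):
    # Product node: multiplies child values (decomposable scopes assumed).
    def f(assign):
        out = 1.0
        for c in children:
            out *= c(assign)
        return out
    return f

def weighted_sum(weights, children):
    # Sum node: convex mixture of children (smoothness assumed).
    def f(assign):
        return sum(w * c(assign) for w, c in zip(weights, children))
    return f

# Mixture of two product components over {X0, X1}.
circuit = weighted_sum(
    [0.3, 0.7],
    [product(bernoulli_leaf("X0", 0.9), bernoulli_leaf("X1", 0.2)),
     product(bernoulli_leaf("X0", 0.4), bernoulli_leaf("X1", 0.8))],
)

# Marginal query p(X0=1): one bottom-up pass with X1 set to None.
p_marg = circuit({"X0": 1, "X1": None})

# Sanity check against explicit summation over X1.
check = circuit({"X0": 1, "X1": 0}) + circuit({"X0": 1, "X1": 1})
```

Here `p_marg` and `check` agree exactly, which is what makes circuits attractive inside a structure-learning loop: each marginal query costs one linear pass rather than an exponential sum.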

Country of Origin
🇺🇸 United States

Page Count
8 pages

Category
Computer Science:
Machine Learning (CS)