The Robustness of Differentiable Causal Discovery in Misspecified Scenarios
By: Huiyang Yi, Yanyan He, Duxin Chen, and others
Potential Business Impact:
Makes computers understand cause and effect better.
Causal discovery aims to learn causal relationships between variables from observed data, making it a fundamental task in machine learning. However, causal discovery algorithms often rely on unverifiable causal assumptions that are difficult to satisfy in real-world data, which limits the broad application of causal discovery in practical scenarios. Motivated by these considerations, this work extensively benchmarks the empirical performance of various mainstream causal discovery algorithms, which assume i.i.d. data, under eight model-assumption violations. Our experimental results show that differentiable causal discovery methods remain robust, as measured by the Structural Hamming Distance and Structural Intervention Distance of the inferred graphs, across commonly used challenging scenarios, with the exception of scale variation. We also provide theoretical explanations for the performance of differentiable causal discovery methods. Finally, our work aims to comprehensively benchmark recent differentiable causal discovery methods under model-assumption violations, to provide a standard for the reasonable evaluation of causal discovery, and to further promote its application in real-world scenarios.
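The abstract evaluates inferred graphs using the Structural Hamming Distance (SHD). As a rough illustration only (not the paper's code), SHD between two binary DAG adjacency matrices counts missing edges, extra edges, and reversed edges; the sketch below assumes the common convention that a reversed edge counts as a single error:

```python
import numpy as np

def structural_hamming_distance(true_adj, est_adj):
    """Illustrative SHD between two binary adjacency matrices.

    Assumes A[i, j] == 1 means an edge i -> j. Missing and extra edges
    count as one error each; a reversed edge counts once (conventions vary).
    """
    true_adj = np.asarray(true_adj)
    est_adj = np.asarray(est_adj)
    diff = np.abs(true_adj - est_adj)
    # A reversed edge (i->j vs. j->i) produces two unit differences,
    # one at [i, j] and one at [j, i]; subtract half so it counts once.
    reversals = (diff + diff.T) == 2
    return int(diff.sum() - reversals.sum() // 2)

# Example: true graph 0->1->2; estimate reverses 0->1 and adds 0->2.
true_g = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]
est_g = [[0, 0, 1], [1, 0, 1], [0, 0, 0]]
print(structural_hamming_distance(true_g, est_g))  # 2 (one reversal + one extra edge)
```

The Structural Intervention Distance, which also appears in the abstract, instead compares the graphs' implied interventional distributions and requires reasoning over adjustment sets, so it is not reducible to a simple edge count like the sketch above.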
Similar Papers
Robust Causal Discovery under Imperfect Structural Constraints
Machine Learning (CS)
Finds true causes even with bad clues.
Differentiable Constraint-Based Causal Discovery
Machine Learning (CS)
Finds causes even with little information.
Differentiable Cyclic Causal Discovery Under Unmeasured Confounders
Machine Learning (CS)
Finds hidden causes even with missing information.