How to Square Tensor Networks and Circuits Without Squaring Them
By: Lorenzo Loconte, Adrián Javaloy, Antonio Vergari
Squared tensor networks (TNs) and their extension to computational graphs, known as squared circuits, have been used as expressive distribution estimators that still support closed-form marginalization. However, the squaring operation introduces additional complexity when computing the partition function or marginalizing variables, which hinders their applicability in machine learning. To address this issue, canonical forms of TNs parameterized via unitary matrices are used to simplify the computation of marginals. However, these canonical forms do not apply to circuits, as circuits can represent factorizations that do not directly map to a known TN. Inspired by the idea of orthogonality in canonical forms and by determinism in circuits, which enables tractable maximization, we show how to parameterize squared circuits so as to overcome their marginalization overhead. Our parameterizations unlock efficient marginalization even for factorizations that differ from TNs but are encoded as circuits, whose structure would otherwise make marginalization computationally hard. Finally, our experiments on distribution estimation show that our proposed conditions on squared circuits come with no loss of expressiveness, while enabling more efficient learning.
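For intuition, here is a minimal NumPy sketch (not the authors' code; all sizes and function names are illustrative assumptions) of the simplest squared TN, a matrix-product state "Born machine" over binary variables. It shows the marginalization overhead the abstract refers to: computing the partition function of the squared model means contracting a doubled copy of the network, whereas a left-canonical (isometric) parameterization, the unitary-matrix idea mentioned above, makes the partition function trivially equal to one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: n binary variables, bond (rank) dimension r.
n, d, r = 4, 2, 3
# MPS cores A[i] of shape (left bond, visible state, right bond);
# boundary bonds have size 1.
cores = [rng.normal(size=(1 if i == 0 else r, d, 1 if i == n - 1 else r))
         for i in range(n)]

def amplitude(cores, x):
    """Contract the MPS on one assignment x; p(x) is this value squared."""
    v = cores[0][:, x[0], :]
    for i in range(1, len(cores)):
        v = v @ cores[i][:, x[i], :]
    return v.item()

def partition_squared(cores):
    """Z = sum_x amplitude(x)^2, via the 'doubled' (squared) network.
    Each step carries an (r x r) environment matrix instead of a
    length-r vector: this blow-up is the marginalization overhead
    that squaring introduces."""
    M = None
    for A in cores:
        # T[a,c,b,d] = sum_s A[a,s,b] * A[c,s,d]  (two copies, visible index summed)
        T = np.einsum('asb,csd->acbd', A, A)
        M = T[0, 0] if M is None else np.einsum('ac,acbd->bd', M, T)
    return M.item()

# Sanity check against the exponential brute-force sum (fine at toy sizes).
Z = partition_squared(cores)
assert np.isclose(Z, sum(amplitude(cores, x) ** 2
                         for x in np.ndindex(*([d] * n))))

def left_canonicalize(cores):
    """QR sweep making every core an isometry (a left-canonical form).
    The final scalar is the state's norm and is dropped, so the squared
    model becomes self-normalized: Z = 1 with no doubled contraction."""
    out, carry = [], np.ones((1, 1))
    for A in cores:
        _, dd, rr = A.shape
        c = carry.shape[0]
        B = np.tensordot(carry, A, axes=([1], [0])).reshape(c * dd, rr)
        Q, carry = np.linalg.qr(B)
        out.append(Q.reshape(c, dd, -1))
    return out

canon = left_canonicalize(cores)
assert np.isclose(partition_squared(canon), 1.0)
# The encoded distribution is unchanged: p(x) = amplitude(x)^2 / Z.
x = (0, 1, 1, 0)
assert np.isclose(amplitude(cores, x) ** 2 / Z, amplitude(canon, x) ** 2)
```

The sketch only covers the TN special case: in circuit form the factorization need not admit such a QR sweep, which is why the paper develops dedicated parameterizations of squared circuits instead.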
Similar Papers
A Tensor Residual Circuit Neural Network Factorized with Matrix Product Operation
Machine Learning (CS)
Proposes a neural network architecture built from tensor residual circuits factorized via matrix product operations.
Regularized second-order optimization of tensor-network Born machines
Machine Learning (CS)
Applies regularized second-order optimization to train tensor-network Born machines.
Superposed Parameterised Quantum Circuits
Quantum Physics
Studies parameterised quantum circuits composed in superposition.