Information Gradient for Directed Acyclic Graphs: A Score-based Framework for End-to-End Mutual Information Maximization

Published: January 5, 2026 | arXiv ID: 2601.01789v1

By: Tadashi Wadayama

Potential Business Impact:

Helps communication and sensing systems send and recover information more reliably by tuning every stage to maximize end-to-end mutual information.

Business Areas:
Semantic Search, Internet Services

This paper presents a general framework for end-to-end mutual information maximization in communication and sensing systems represented by stochastic directed acyclic graphs (DAGs). We derive a unified formula for the (mutual) information gradient with respect to arbitrary internal parameters, expressed in terms of marginal and conditional score functions. We demonstrate that this gradient can be computed efficiently using vector-Jacobian products (VJPs) within standard automatic differentiation frameworks, enabling the optimization of complex networks under global resource constraints. Numerical experiments on both linear multipath DAGs and nonlinear channels validate the proposed framework; the results confirm that the estimator, built on score functions learned via denoising score matching, accurately reproduces ground-truth gradients and successfully maximizes end-to-end mutual information. Beyond maximization, we extend the score-based framework to a novel unsupervised paradigm: digital twin calibration via Fisher divergence minimization.
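
The following JAX sketch illustrates the core idea the abstract describes: for a differentiable forward map y = f_theta(x, n), the mutual information gradient can be written as an expectation of (s_{Y|X}(y|x) - s_Y(y)) paired with the Jacobian of f_theta, and evaluated with a single vector-Jacobian product. This is a minimal illustration under stated assumptions, not the paper's implementation: it uses a toy scalar Gaussian channel y = theta*x + n, for which both scores are available in closed form, whereas the paper learns the marginal and conditional scores via denoising score matching. All function names (`channel`, `cond_score`, `marg_score`, `info_grad_estimate`) and constants are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

SIGMA_X, SIGMA_N = 1.0, 0.5  # assumed source and noise standard deviations

def channel(theta, x, n):
    """Forward map y = f_theta(x, n): a scalar gain channel as a toy DAG."""
    return theta * x + SIGMA_N * n

def cond_score(y, x, theta):
    """Conditional score s_{Y|X}(y|x) = grad_y log N(y; theta*x, sigma_n^2)."""
    return -(y - theta * x) / SIGMA_N**2

def marg_score(y, theta):
    """Marginal score of Y ~ N(0, theta^2*sigma_x^2 + sigma_n^2)."""
    return -y / ((theta * SIGMA_X) ** 2 + SIGMA_N**2)

def info_grad_estimate(theta, key, num_samples=200_000):
    """Monte Carlo estimate of dI(X;Y)/dtheta via one vector-Jacobian product:
    E[(s_{Y|X}(Y|X) - s_Y(Y)) * d f_theta(X, N) / d theta]."""
    kx, kn = jax.random.split(key)
    x = SIGMA_X * jax.random.normal(kx, (num_samples,))
    n = jax.random.normal(kn, (num_samples,))
    y, vjp_fn = jax.vjp(lambda t: channel(t, x, n), theta)
    v = cond_score(y, x, theta) - marg_score(y, theta)  # cotangent vector
    (grad_sum,) = vjp_fn(v)          # sums v_i * dy_i/dtheta over all samples
    return grad_sum / num_samples    # Monte Carlo average

def info_grad_exact(theta):
    """Closed-form dI/dtheta for the Gaussian toy channel (sanity check)."""
    return theta * SIGMA_X**2 / ((theta * SIGMA_X) ** 2 + SIGMA_N**2)

if __name__ == "__main__":
    theta = jnp.float32(1.3)
    est = info_grad_estimate(theta, jax.random.PRNGKey(0))
    print("estimated dI/dtheta:", float(est), "exact:", float(info_grad_exact(theta)))
```

For this toy channel the estimate can be checked against the closed-form derivative of I = 0.5*log(1 + theta^2*sigma_x^2/sigma_n^2), mirroring the ground-truth comparison mentioned in the abstract; in the paper's setting the analytic scores would be replaced by score networks trained with denoising score matching.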

Country of Origin
🇯🇵 Japan

Page Count
16 pages

Category
Computer Science:
Information Theory