Stable and Convexified Information Bottleneck Optimization via Symbolic Continuation and Entropy-Regularized Trajectories
By: Faruk Alpay
Potential Business Impact:
Makes AI learn better without breaking.
The Information Bottleneck (IB) method frequently suffers from unstable optimization, characterized by abrupt representation shifts near critical values of the IB trade-off parameter, beta. In this paper, I introduce a novel approach that achieves stable and convex IB optimization through symbolic continuation and entropy-regularized trajectories. I analytically prove convexity and uniqueness of the IB solution path when an entropy regularization term is included, and demonstrate how this stabilizes representation learning across a wide range of beta values. Additionally, I provide extensive sensitivity analyses around critical beta values with statistically robust uncertainty quantification (95% confidence intervals). The open-source implementation, experimental results, and reproducibility framework included in this work offer a clear path for practical deployment and future extension of my proposed method.
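To make the objective concrete, below is a minimal numerical sketch, not the paper's released implementation. It assumes the entropy-regularized IB Lagrangian takes the common form L = I(X;T) - beta * I(T;Y) - eps * H(T|X) for a stochastic encoder p(t|x) on a discrete toy problem; the toy joint distribution, the eps value, and the fixed random encoder are all illustrative assumptions.

import numpy as np

def mutual_information(pxy):
    # I(X;Y) in nats for a joint distribution p(x, y).
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])))

def ib_objective(enc, pxy, beta, eps):
    # Entropy-regularized IB value for an encoder p(t|x).
    # enc has shape (|X|, |T|) with rows summing to 1.
    # Assumed form: I(X;T) - beta*I(T;Y) - eps*H(T|X).
    px = pxy.sum(axis=1)               # p(x)
    pxt = px[:, None] * enc            # joint p(x, t)
    pty = enc.T @ pxy                  # joint p(t, y) via the Markov chain T - X - Y
    mask = enc > 0
    h_t_given_x = -float(np.sum(pxt[mask] * np.log(enc[mask])))  # H(T|X)
    return mutual_information(pxt) - beta * mutual_information(pty) - eps * h_t_given_x

# Toy joint distribution over a 3-state X and binary Y.
rng = np.random.default_rng(0)
pxy = rng.dirichlet(np.ones(6)).reshape(3, 2)

# Sweep beta along a trajectory with a fixed random encoder, just to show
# how the objective responds to beta.
enc = rng.dirichlet(np.ones(2), size=3)  # p(t|x), |T| = 2
for beta in np.linspace(0.5, 5.0, 5):
    print(f"beta={beta:.2f}  L={ib_objective(enc, pxy, beta, eps=0.1):.4f}")

In a full continuation scheme, the encoder would be re-optimized at each beta, warm-started from the solution at the previous beta; the entropy term is what keeps the resulting solution path smooth near critical beta values rather than jumping between representations.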
Similar Papers
Information Must Flow: Recursive Bootstrapping for Information Bottleneck in Optimal Transport
Machine Learning (Stat)
Helps minds learn and share ideas better.
Intuitive dissection of the Gaussian information bottleneck method with an application to optimal prediction
Molecular Networks
Finds the best way to remember important things.
A Generalized Information Bottleneck Theory of Deep Learning
Machine Learning (CS)
Helps computers learn better by understanding feature connections.