Mirror Descent Using the Tempesta Generalized Multi-parametric Logarithms
By: Andrzej Cichocki
Potential Business Impact:
Makes machine learning smarter and faster.
In this paper, we develop a wide class of Mirror Descent (MD) algorithms, which play a key role in machine learning. For this purpose, we formulate a constrained optimization problem in which we exploit the Bregman divergence with the Tempesta multi-parametric deformed logarithm as a link function. This link function, also called a mirror function, defines the mapping between the primal and dual spaces and is associated with a very wide (in fact, theoretically infinite) class of generalized trace-form entropies. In order to derive novel MD updates, we estimate a generalized exponential function that closely approximates the inverse of the multi-parametric Tempesta generalized logarithm. The shape and properties of the Tempesta logarithm and its inverse deformed exponential function can be tuned by several hyperparameters. By learning these hyperparameters, we can adapt to the distribution or geometry of the training data, and we can adjust them to achieve desired properties of the MD algorithms. The concept of applying multi-parametric logarithms allows us to generate a new, wide, and flexible family of MD and mirror-less MD updates.
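A minimal sketch of the deformed-logarithm MD update described above, assuming the one-parameter Tsallis q-logarithm as a stand-in for the Tempesta multi-parametric logarithm (whose exact form is not given in this abstract); the q-logarithm is a well-known special case of the trace-form family that the Tempesta construction generalizes. The names q_log, q_exp, and mirror_descent_step are illustrative, and the closing simplex normalization is a simple heuristic projection rather than the paper's exact update.

```python
import numpy as np

def q_log(x, q=0.7):
    """Tsallis q-logarithm, a one-parameter member of the trace-form
    deformed-logarithm family (q -> 1 recovers the natural log)."""
    if np.isclose(q, 1.0):
        return np.log(x)
    return (x**(1.0 - q) - 1.0) / (1.0 - q)

def q_exp(x, q=0.7):
    """Tsallis q-exponential, the exact inverse of q_log on its range."""
    if np.isclose(q, 1.0):
        return np.exp(x)
    return np.maximum(1.0 + (1.0 - q) * x, 0.0) ** (1.0 / (1.0 - q))

def mirror_descent_step(w, grad, lr=0.1, q=0.7):
    """One MD update: map weights to the dual space with the deformed
    log, take a gradient step there, then map back with its inverse."""
    dual = q_log(w, q) - lr * grad   # gradient step in the dual space
    w_new = q_exp(dual, q)           # back to the primal space
    return w_new / w_new.sum()       # heuristic projection onto the simplex

# Toy usage: minimize <c, w> over the probability simplex.
rng = np.random.default_rng(0)
c = rng.standard_normal(5)           # gradient of the linear loss is c itself
w = np.full(5, 0.2)
for _ in range(200):
    w = mirror_descent_step(w, c, lr=0.05, q=0.7)
print(w.round(3))                    # mass concentrates on the smallest entry of c
```

As q approaches 1, q_log and q_exp reduce to the natural logarithm and exponential, and the update recovers the classical exponentiated-gradient algorithm; tuning q (and, in the full Tempesta family, several such hyperparameters) is what lets the update adapt to the distribution or geometry of the training data.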
Similar Papers
Mirror Descent and Novel Exponentiated Gradient Algorithms Using Trace-Form Entropies and Deformed Logarithms
Machine Learning (CS)
Teaches computers to learn faster and better.
The Hidden Cost of Approximation in Online Mirror Descent
Machine Learning (CS)
Makes machine learning more accurate with imperfect data.
Variational Online Mirror Descent for Robust Learning in Schrödinger Bridge
Machine Learning (CS)
Makes AI learn better and faster.