Hellinger loss function for Generative Adversarial Networks
By: Giovanni Saraceno, Anand N. Vidyashankar, Claudio Agostinelli
We propose Hellinger-type loss functions for training Generative Adversarial Networks (GANs), motivated by the boundedness, symmetry, and robustness properties of the Hellinger distance. We define an adversarial objective based on this divergence and study its statistical properties within a general parametric framework. We establish the existence, uniqueness, consistency, and joint asymptotic normality of the estimators obtained from the adversarial training procedure. In particular, we analyze the joint estimation of the generator and discriminator parameters, providing a comprehensive asymptotic characterization of the resulting estimators. We introduce two implementations of the Hellinger-type loss and evaluate their empirical behavior against the classical (maximum-likelihood-type) GAN loss. In a controlled simulation study, both proposed losses yield improved estimation accuracy and robustness under increasing levels of data contamination.
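The abstract does not reproduce the paper's objectives, so the following is only a point of reference. The squared Hellinger distance between densities p and q (with respect to a dominating measure μ) is H²(p, q) = (1/2)∫(√p − √q)² dμ = 1 − ∫√(pq) dμ, which is symmetric and bounded in [0, 1]. A minimal sketch of one possible Hellinger-type adversarial loss, using the f-GAN variational formulation of the squared Hellinger divergence (f(u) = (√u − 1)², conjugate f*(t) = t/(1 − t), output activation g(v) = 1 − e^{−v}; Nowozin et al., 2016), is given below; the use of PyTorch and the function names hellinger_d_loss and hellinger_g_loss are illustrative assumptions, not the paper's two implementations.

import torch

def hellinger_d_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    # Variational lower bound on H^2: E_real[g(T)] - E_fake[f*(g(T))],
    # with g(v) = 1 - e^{-v} and f*(g(v)) = e^{v} - 1.
    # d_real = T(x) on real data, d_fake = T(G(z)) on generated data,
    # both raw (unconstrained) discriminator outputs.
    g_real = -torch.expm1(-d_real)        # g(v) = 1 - e^{-v}
    fstar_fake = torch.expm1(d_fake)      # f*(g(v)) = e^{v} - 1
    return -(g_real.mean() - fstar_fake.mean())  # negate: optimizer minimizes, bound is maximized

def hellinger_g_loss(d_fake: torch.Tensor) -> torch.Tensor:
    # The generator minimizes the same bound; only the fake-sample term
    # depends on G, so its loss is -E_fake[f*(g(T(G(z))))].
    return -torch.expm1(d_fake).mean()

In this saddle-point form the discriminator maximizes the variational bound (hence the negated return value) while the generator minimizes it; a non-saturating variant, analogous to the standard GAN trick, would instead have the generator minimize torch.expm1(-d_fake).mean().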