IB-GAN: Disentangled Representation Learning with Information Bottleneck Generative Adversarial Networks
By: Insu Jeon, Wonkwang Lee, Myeongjang Pyeon, and more
Potential Business Impact:
Makes computer-generated images easier to control and understand.
We propose a new GAN-based unsupervised model for disentangled representation learning. The model arises from applying the Information Bottleneck (IB) framework to the optimization of GANs, and is therefore named IB-GAN. The architecture of IB-GAN is partially similar to that of InfoGAN but has a critical difference: an intermediate layer of the generator is leveraged to constrain the mutual information between the input and the generated output. This intermediate stochastic layer serves as a learnable latent distribution that is trained jointly with the generator in an end-to-end fashion. As a result, the generator of IB-GAN can harness the latent space in a disentangled and interpretable manner. In experiments on the dSprites and Color-dSprites datasets, we demonstrate that IB-GAN achieves disentanglement scores competitive with those of state-of-the-art β-VAEs and outperforms InfoGAN. Moreover, in terms of FID score on the CelebA and 3D Chairs datasets, the visual quality and diversity of samples generated by IB-GAN are often better than those of β-VAE and InfoGAN.
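To make the described architecture concrete, below is a minimal PyTorch sketch of how an intermediate stochastic layer can constrain the mutual information between the input noise and the generated output. It reflects one reading of the abstract only: every module name, layer size, and the trade-off weight beta is an illustrative assumption, not the authors' implementation.

```python
# Minimal sketch of the IB-GAN idea as described in the abstract.
# All names, layer sizes, and the weight `beta` are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class StochasticEncoder(nn.Module):
    """Intermediate stochastic layer e(r|z): maps input noise z to a learnable
    Gaussian latent r; its KL to N(0, I) upper-bounds I(z; r) (the IB constraint)."""
    def __init__(self, z_dim=64, r_dim=10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, r_dim)
        self.logvar = nn.Linear(128, r_dim)

    def forward(self, z):
        h = self.net(z)
        mu, logvar = self.mu(h), self.logvar(h)
        r = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(dim=1).mean()
        return r, kl


class Generator(nn.Module):
    """Maps the intermediate representation r to a (flattened) image x = G(r)."""
    def __init__(self, r_dim=10, img_dim=64 * 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(r_dim, 256), nn.ReLU(), nn.Linear(256, img_dim), nn.Tanh())

    def forward(self, r):
        return self.net(r)


class Reconstructor(nn.Module):
    """Recognition network q(r|x), InfoGAN-style, giving a variational lower bound on I(r; G(r))."""
    def __init__(self, img_dim=64 * 64, r_dim=10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(img_dim, 256), nn.ReLU(), nn.Linear(256, r_dim))

    def forward(self, x):
        return self.net(x)


class Discriminator(nn.Module):
    """Standard GAN discriminator returning a real/fake logit."""
    def __init__(self, img_dim=64 * 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

    def forward(self, x):
        return self.net(x)


def ib_generator_loss(z, enc, gen, rec, disc, beta=0.1):
    """Generator-side objective: adversarial loss, minus a lower bound on I(r; G(r))
    (a Gaussian log-likelihood proxy via MSE), plus beta times the upper bound on I(z; r)."""
    r, kl = enc(z)
    x_fake = gen(r)
    adv = F.binary_cross_entropy_with_logits(disc(x_fake), torch.ones(z.size(0), 1))
    mi_lower_bound = -F.mse_loss(rec(x_fake), r)
    return adv - mi_lower_bound + beta * kl


# Usage: one generator-side step on random noise.
enc, gen, rec, disc = StochasticEncoder(), Generator(), Reconstructor(), Discriminator()
loss = ib_generator_loss(torch.randn(16, 64), enc, gen, rec, disc)
loss.backward()
```

In this sketch, the KL term stands in for the IB upper bound on the mutual information between the noise input and the intermediate representation, while the recognition network provides an InfoGAN-style lower bound on the information the representation carries about the generated output; the exact bounds and objective used in the paper may differ.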
Similar Papers
A Generalized Information Bottleneck Theory of Deep Learning
Machine Learning (CS)
Helps computers learn better by understanding feature connections.
Mixture of Balanced Information Bottlenecks for Long-Tailed Visual Recognition
CV and Pattern Recognition
Helps computers recognize many things, even rare ones.