VAE-GANs Hybrid with Adversarial Reconstruction Loss
Statistical Learning Theory
We propose a new autoencoding model that combines the best properties of variational autoencoders (VAEs) and generative adversarial networks (GANs): GANs can produce highly realistic samples, while VAEs do not suffer from mode collapse. Our model optimizes the Jeffreys divergence between the model distribution and the true data distribution, and we show that this objective inherits the strengths of both the VAE and GAN objectives. It splits into two parts: one can be optimized with standard adversarial training, and the other is exactly the VAE objective. A further drawback of VAEs is that the likelihood of reconstructions must be defined explicitly, which limits model flexibility, especially in high-dimensional spaces, and may lead to undesirable effects in generated samples (e.g., blurry images). We therefore propose an adversarial scheme for training a VAE with an implicit likelihood. In an extensive set of experiments on the CIFAR-10 and Tiny ImageNet datasets, we show that our model achieves a healthy balance between generation and reconstruction quality compared to existing baselines.
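To make the objective concrete: the Jeffreys divergence is, by definition, the symmetrized Kullback-Leibler divergence. A sketch of the two-part decomposition described above, with $p_{\text{data}}$ and $p_\theta$ as our notation (not necessarily the paper's) for the true data distribution and the model distribution:

```latex
% Jeffreys divergence = forward KL + reverse KL
D_J\bigl(p_{\text{data}} \,\|\, p_\theta\bigr)
  = \underbrace{\mathrm{KL}\bigl(p_{\text{data}} \,\|\, p_\theta\bigr)}_{\text{maximum likelihood; upper-bounded via the VAE objective}}
  \;+\;
  \underbrace{\mathrm{KL}\bigl(p_\theta \,\|\, p_{\text{data}}\bigr)}_{\text{optimizable by adversarial training}}
```

The reverse KL term is the one amenable to GAN-style adversarial training, while minimizing the forward KL term is equivalent to maximum-likelihood training, for which the VAE evidence lower bound provides a tractable surrogate.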