Generative Model

Variational Autoencoder (VAE)

A latent-variable model trained with variational inference: the generative model is $p_\theta(x) = \int p_\theta(x \mid z)\, p(z)\, dz$.

This is a more powerful variant of the autoencoder: instead of a fixed code, the bottleneck represents features as a distribution. Sampling from that distribution is not differentiable, which breaks backpropagation, but this is overcome with the reparameterization trick.

Resources

It's basically an autoencoder, but the latent variable z is sampled from a learned Gaussian rather than computed deterministically; in effect, Gaussian noise is injected into z during encoding.

Key difference:

  • Regular Autoencoder
    • Input → Encoder → Fixed latent representation → Decoder → Reconstruction.
  • VAE
    • Input → Encoder → Latent distribution → Sample from distribution (adds Gaussian noise via reparameterization trick) → Decoder → Reconstruction
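The key difference above can be sketched in a few lines of NumPy. The encoder "heads" here (`np.tanh`, `-np.abs`) are arbitrary stand-ins for learned networks, chosen only to show that the AE bottleneck is deterministic while the VAE bottleneck is stochastic:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)  # toy input vector

# Regular autoencoder: encoder maps the input to one fixed latent code.
def ae_encode(x):
    return np.tanh(x)  # deterministic z

# VAE: encoder outputs distribution parameters; z is sampled from them.
def vae_encode(x, rng):
    mu, log_var = np.tanh(x), -np.abs(x)   # stand-ins for learned heads
    eps = rng.normal(size=mu.shape)        # Gaussian noise, eps ~ N(0, I)
    z = mu + np.exp(0.5 * log_var) * eps   # reparameterization trick
    return z

z_ae_1, z_ae_2 = ae_encode(x), ae_encode(x)
z_vae_1, z_vae_2 = vae_encode(x, rng), vae_encode(x, rng)
print(np.allclose(z_ae_1, z_ae_2))    # True: same code every time
print(np.allclose(z_vae_1, z_vae_2))  # False: a fresh sample each pass
```

Encoding the same input twice gives identical codes for the plain autoencoder but different samples for the VAE.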

Variational autoencoders provide a principled framework for learning deep latent-variable models and corresponding inference models.

Process

Forward Pass (Encoding → Sampling → Decoding)

  1. Encoder:
    Input data $x$ is mapped to the parameters (mean $\mu$ and variance $\sigma^2$) of the latent distribution $q_\phi(z \mid x) = \mathcal{N}(z; \mu, \sigma^2 I)$.
  2. Reparameterization Trick:
    Differentiably sample the latent variable $z$: $z = \mu + \sigma \odot \epsilon$, where $\epsilon \sim \mathcal{N}(0, I)$.
  3. Decoder:
    Reconstruct the data from the sampled latent vector $z$: $\hat{x} \sim p_\theta(x \mid z)$.
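The three steps above can be sketched end to end. This is a minimal NumPy sketch with untrained random linear maps standing in for the encoder and decoder networks, so it shows only the data flow, not a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 8, 2  # illustrative data and latent dimensions

# Random linear maps standing in for learned networks.
W_mu = rng.normal(size=(H, D))
W_lv = rng.normal(size=(H, D))
W_dec = rng.normal(size=(D, H))

def encode(x):
    """Encoder q_phi(z|x): outputs mean and log-variance of the latent."""
    return W_mu @ x, W_lv @ x

def reparameterize(mu, log_var, rng):
    """z = mu + sigma * eps keeps the sample differentiable in (mu, sigma)."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def decode(z):
    """Decoder p_theta(x|z): reconstructs x from the latent sample."""
    return W_dec @ z

x = rng.normal(size=D)
mu, log_var = encode(x)          # step 1
z = reparameterize(mu, log_var, rng)  # step 2
x_hat = decode(z)                # step 3
print(x_hat.shape)  # (8,)
```

Because the noise `eps` is drawn outside the computation path of `mu` and `log_var`, gradients flow through both parameters during training.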

Loss Function (Negative ELBO):
Optimize the encoder parameters $\phi$ and decoder parameters $\theta$ by minimizing:

$$\mathcal{L}(\theta, \phi; x) = -\mathbb{E}_{q_\phi(z \mid x)}\left[\log p_\theta(x \mid z)\right] + D_{\mathrm{KL}}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right)$$
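A minimal sketch of this loss, assuming a Gaussian decoder (so the reconstruction term reduces to a squared error, up to constants) and a standard-normal prior, for which the KL term has a closed form:

```python
import numpy as np

def negative_elbo(x, x_hat, mu, log_var):
    """Negative ELBO for a diagonal-Gaussian encoder and N(0, I) prior."""
    # Reconstruction term: squared error for a Gaussian decoder.
    recon = np.sum((x - x_hat) ** 2)
    # KL(q_phi(z|x) || N(0, I)), closed form for diagonal Gaussians.
    kl = -0.5 * np.sum(1.0 + log_var - mu ** 2 - np.exp(log_var))
    return recon + kl

# Perfect reconstruction with q(z|x) = N(0, I) gives a loss of exactly 0.
x = np.ones(4)
print(negative_elbo(x, x, np.zeros(2), np.zeros(2)))  # 0.0
```

The KL term acts as a regularizer pulling each latent posterior toward the prior; the reconstruction term pulls the decoder toward faithful outputs, and training balances the two.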

Notes from the guide

The VAE can be viewed as two coupled, but independently parameterized models:

  1. encoder (recognition model)
  2. decoder (generative model)

Variants