Generative Model

Variational Autoencoder (VAE)

Latent variable model trained with variational inference:

$$p_\theta(x) = \int p_\theta(x \mid z)\, p(z)\, dz$$

This is a more powerful variant of the Autoencoder: instead of fixed vectors, it represents the features in its bottleneck as distributions. Sampling from those distributions breaks backprop (there is no gradient through a random draw), but this is overcome with the reparameterization trick.

Resources

“VAE is deeply rooted in the methods of variational bayesian and graphical model.”

  • #todo I don't fully understand this
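
To unpack the quote: variational Bayes treats the intractable posterior $p_\theta(z \mid x)$ as something to approximate with a learned $q_\phi(z \mid x)$, and the $z \to x$ structure is the graphical model. For any such $q_\phi$, Jensen's inequality gives the lower bound the VAE optimizes (this is standard variational-Bayes material, not from the original note):

$$\log p_\theta(x) = \log \mathbb{E}_{q_\phi(z \mid x)}\!\left[\frac{p_\theta(x \mid z)\, p(z)}{q_\phi(z \mid x)}\right] \geq \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] - D_{\mathrm{KL}}\big(q_\phi(z \mid x)\,\|\,p(z)\big) = \text{ELBO}$$

Maximizing this ELBO is exactly minimizing the negative-ELBO loss defined at the end of this note.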

Roughly, yes: it's an Autoencoder whose encoder outputs a distribution over the latent variable z instead of a fixed vector, and sampling z via the reparameterization trick amounts to adding scaled Gaussian noise to the predicted mean.

Key difference:

  • Regular Autoencoder
    • Input → Encoder → Fixed latent representation → Decoder → Reconstruction.
  • VAE
    • Input → Encoder → Latent distribution → Sample from distribution (adds Gaussian noise via reparameterization trick) → Decoder → Reconstruction
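
To make the difference concrete, here is a minimal sketch of the two bottlenecks (PyTorch and the layer sizes are my assumptions for illustration):

```python
import torch
import torch.nn as nn

class AEBottleneck(nn.Module):
    """Regular autoencoder: a fixed, deterministic latent vector."""
    def __init__(self, hidden_dim=256, latent_dim=32):
        super().__init__()
        self.to_z = nn.Linear(hidden_dim, latent_dim)

    def forward(self, h):
        return self.to_z(h)  # same input -> same latent, every time

class VAEBottleneck(nn.Module):
    """VAE: Gaussian parameters, then a differentiable sample."""
    def __init__(self, hidden_dim=256, latent_dim=32):
        super().__init__()
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)

    def forward(self, h):
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)  # Gaussian noise; gradients don't flow here
        return mu + std * eps        # reparameterization trick
```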

Process

Forward Pass (Encoding → Sampling → Decoding)

  1. Encoder:
    Input data $x$, outputs parameters (mean $\mu$ and variance $\sigma^2$) of the latent distribution $q_\phi(z \mid x)$:
    $$\mu,\ \log \sigma^2 = \text{Encoder}_\phi(x), \qquad q_\phi(z \mid x) = \mathcal{N}\big(z;\ \mu,\ \sigma^2 I\big)$$
  2. Reparameterization Trick:
    Differentiably sample the latent variable $z$:
    $$z = \mu + \sigma \odot \epsilon, \qquad \epsilon \sim \mathcal{N}(0, I)$$
  3. Decoder:
    Reconstruct data $\hat{x}$ from the sampled latent vector $z$:
    $$\hat{x} = \text{Decoder}_\theta(z)$$
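
These three steps map directly onto code. A minimal end-to-end sketch (PyTorch, the MLP layers, and the dimensions, e.g. 784 for flattened MNIST-style inputs, are my assumptions):

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=256, latent_dim=32):
        super().__init__()
        # 1. Encoder: x -> features -> (mu, log sigma^2) of q(z|x)
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        # 3. Decoder: z -> reconstruction x_hat
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim), nn.Sigmoid(),
        )

    def reparameterize(self, mu, logvar):
        # 2. z = mu + sigma * eps with eps ~ N(0, I), so gradients
        # flow through mu and sigma but not through the noise
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + std * eps

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = self.reparameterize(mu, logvar)
        return self.decoder(z), mu, logvar
```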

Loss Function (Negative ELBO):
Optimize the encoder parameters $\phi$ and decoder parameters $\theta$ by minimizing:

$$\mathcal{L}(\theta, \phi; x) = -\,\mathbb{E}_{z \sim q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] + D_{\mathrm{KL}}\big(q_\phi(z \mid x)\,\|\,p(z)\big)$$

The first term is the reconstruction loss; the second pulls the latent distribution toward the prior $p(z) = \mathcal{N}(0, I)$.
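
A matching loss sketch (binary cross-entropy for the reconstruction term assumes inputs scaled to $[0, 1]$; the KL term uses the closed form for a diagonal Gaussian against $\mathcal{N}(0, I)$):

```python
import torch
import torch.nn.functional as F

def vae_loss(x_hat, x, mu, logvar):
    # Reconstruction term: -E_q[log p(x|z)], here as BCE summed over the batch
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    # KL(q(z|x) || N(0, I)) in closed form for a diagonal Gaussian
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl  # negative ELBO
```

One training step is then `x_hat, mu, logvar = model(x)`, `loss = vae_loss(x_hat, x, mu, logvar)`, `loss.backward()`.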