Variational Autoencoder (VAE)
Latent variable model trained with variational inference:
This is a more powerful variant of the Autoencoder that uses distributions, rather than fixed vectors, to represent the features in its bottleneck. Sampling from those distributions breaks backprop, which is overcome with the reparameterization trick.
Resources
"VAE is deeply rooted in the methods of variational Bayesian and graphical models."
- #todo I don't fully understand this
It's basically an Autoencoder, but we add Gaussian noise to the latent variable $z$.
Key difference:
- Regular Autoencoder
    - Input → Encoder → Fixed latent representation → Decoder → Reconstruction
- VAE
    - Input → Encoder → Latent distribution → Sample from distribution (adds Gaussian noise via the reparameterization trick; see the sketch after this list) → Decoder → Reconstruction
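A minimal PyTorch sketch of that difference at the latent step (the toy linear layer and the 784/16 dimensions are illustrative assumptions, not from the source):

```python
import torch

x = torch.randn(8, 784)                 # dummy batch of flattened 28x28 images

# A regular autoencoder would use a fixed code: z = encoder(x).
# A VAE encoder outputs distribution parameters instead:
enc = torch.nn.Linear(784, 2 * 16)      # toy encoder; 16-dim latent (sizes assumed)
mu, logvar = enc(x).chunk(2, dim=-1)    # mean and log-variance of q(z|x)

eps = torch.randn_like(mu)              # Gaussian noise, eps ~ N(0, I)
z = mu + torch.exp(0.5 * logvar) * eps  # sampled latent code, still differentiable
```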
Process
Forward Pass (Encoding → Sampling → Decoding)
- Encoder:
  Takes input data $x$ and outputs the parameters (mean $\mu$ and variance $\sigma^2$) of the latent distribution $q_\phi(z \mid x) = \mathcal{N}(z; \mu, \sigma^2 I)$.
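A toy encoder along these lines (the MLP architecture and the 784/400/20 dimensions are assumptions; predicting log-variance instead of variance is a common convention that keeps $\sigma^2 > 0$):

```python
import torch.nn as nn

class Encoder(nn.Module):
    """Maps x to the parameters (mu, log sigma^2) of q(z|x)."""
    def __init__(self, x_dim=784, h_dim=400, z_dim=20):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)       # mean head
        self.logvar = nn.Linear(h_dim, z_dim)   # log-variance head

    def forward(self, x):
        h = self.hidden(x)
        return self.mu(h), self.logvar(h)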
- Reparameterization Trick:
  Differentiably sample the latent variable $z$: $z = \mu + \sigma \odot \epsilon$, where $\epsilon \sim \mathcal{N}(0, I)$.
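The trick in code, as a minimal sketch: all the randomness lives in $\epsilon$, which is sampled outside the computation graph, so gradients flow through $\mu$ and $\sigma$.

```python
import torch

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps with eps ~ N(0, I)."""
    std = torch.exp(0.5 * logvar)  # sigma = exp(0.5 * log sigma^2)
    eps = torch.randn_like(std)    # noise; no gradient flows into eps
    return mu + std * eps
```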
- Decoder:
  Reconstruct the data $\hat{x}$ from the sampled latent vector $z$ via the decoder $p_\theta(x \mid z)$.
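A matching toy decoder (again, architecture and sizes are assumptions; the sigmoid output assumes pixel data scaled to $[0, 1]$):

```python
import torch.nn as nn

class Decoder(nn.Module):
    """Maps a latent sample z to the reconstruction x_hat, i.e. the mean of p(x|z)."""
    def __init__(self, z_dim=20, h_dim=400, x_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, x_dim), nn.Sigmoid(),  # outputs in [0, 1]
        )

    def forward(self, z):
        return self.net(z)
```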
Loss Function (Negative ELBO):
Optimize the encoder parameters $\phi$ and decoder parameters $\theta$ by minimizing:

$\mathcal{L}(\theta, \phi; x) = -\mathbb{E}_{z \sim q_\phi(z \mid x)}\left[\log p_\theta(x \mid z)\right] + D_{\mathrm{KL}}\left(q_\phi(z \mid x) \,\|\, p(z)\right)$

with a standard normal prior $p(z) = \mathcal{N}(0, I)$.
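A sketch of this loss in PyTorch, assuming a Bernoulli decoder (so the reconstruction term is binary cross-entropy) and using the closed-form KL divergence between $\mathcal{N}(\mu, \sigma^2)$ and $\mathcal{N}(0, I)$:

```python
import torch
import torch.nn.functional as F

def vae_loss(x_hat, x, mu, logvar):
    """Negative ELBO: reconstruction term plus KL(q(z|x) || N(0, I))."""
    # -E[log p(x|z)] under a Bernoulli decoder, summed over the batch
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    # Closed-form KL against the standard normal prior
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```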