Unsupervised Learning

Autoencoders

https://www.v7labs.com/blog/autoencoders-guide

Video: https://www.youtube.com/watch?v=bIaT2X5Hd5k&ab_channel=DigitalSreeni

An autoencoder is a type of neural network used to learn data encodings in an unsupervised manner. Autoencoders consist of 3 parts:

  1. Encoder: Compresses the input data into a lower-dimensional representation.
  2. Bottleneck: Contains the compressed feature representation (the "features"). This is the most important part of the network.
  3. Decoder: Tries to reconstruct the input data from the bottleneck. The output is then compared with the ground truth (the original input).
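The three parts above can be sketched in a minimal PyTorch model (the layer sizes here are illustrative assumptions, e.g. a flattened 28x28 image and a 32-dimensional bottleneck; nothing in the notes fixes these numbers):

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    # Hypothetical sizes: 784-dim input (flattened 28x28 image), 32-dim bottleneck.
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        # 1. Encoder: compresses the input.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # 3. Decoder: tries to reconstruct the input from the bottleneck.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)      # 2. Bottleneck: the compressed features
        return self.decoder(z)   # reconstruction

model = Autoencoder()
x = torch.randn(16, 784)                  # a batch of dummy inputs
recon = model(x)
# The reconstruction is compared with the "ground truth" (the input itself),
# so no labels are needed — this is what makes it unsupervised.
loss = nn.functional.mse_loss(recon, x)
```

The reconstruction loss (MSE here) is what gets backpropagated during training; the labels are just the inputs themselves.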

One practical application of autoencoders is that we remove the decoder and simply use the trained encoder as a feature extractor feeding a standard CNN.
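A rough sketch of that idea: keep only the encoder, freeze it, and attach a classifier head on top of its features. (The sizes and the 10-class head are assumptions for illustration; in practice the encoder weights would be loaded from the trained autoencoder.)

```python
import torch
import torch.nn as nn

# Hypothetical encoder; in practice its weights come from autoencoder pretraining,
# e.g. encoder.load_state_dict(trained_autoencoder_encoder_state).
encoder = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),
    nn.Linear(128, 32),
)

# Freeze the encoder so only the new head is trained.
for p in encoder.parameters():
    p.requires_grad = False

# Attach a classifier head (assuming 10 classes) on top of the learned features.
classifier = nn.Sequential(encoder, nn.Linear(32, 10))

x = torch.randn(4, 784)
logits = classifier(x)  # shape: (batch, num_classes)
```

Only the final linear layer receives gradients here; the encoder acts purely as a fixed feature extractor.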

Encoder-Decoder: I saw a video on this from Two Minute Papers.

Variational AutoEncoder (VAE)

This is a much more powerful variant of the autoencoder: instead of fixed vectors, it represents the features in its bottleneck as probability distributions (typically a mean and a variance per latent dimension). Sampling from these distributions is not differentiable, which causes issues with backprop, but this is overcome with the reparameterization trick.
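The reparameterization trick can be sketched as follows: instead of sampling z directly from N(mu, sigma^2), sample eps from N(0, I) and compute z = mu + sigma * eps, so gradients can flow through mu and sigma. (Layer sizes are illustrative assumptions, as in the earlier sketch.)

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    # Hypothetical sizes: 784-dim input, 16-dim latent distribution.
    def __init__(self, input_dim=784, latent_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU())
        self.to_mu = nn.Linear(128, latent_dim)      # mean of q(z|x)
        self.to_logvar = nn.Linear(128, latent_dim)  # log-variance of q(z|x)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I).
        # The randomness is isolated in eps, so backprop reaches mu and logvar.
        eps = torch.randn_like(mu)
        z = mu + torch.exp(0.5 * logvar) * eps
        return self.dec(z), mu, logvar

model = VAE()
x = torch.randn(8, 784)
recon, mu, logvar = model(x)
# VAE loss = reconstruction term + KL divergence pulling q(z|x) toward N(0, I).
recon_loss = nn.functional.mse_loss(recon, x, reduction="sum")
kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
loss = recon_loss + kl
```

Without the trick, the sampling step would block gradients from reaching the encoder; with it, the whole network trains end to end with ordinary backprop.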