# CS294-158-SP20 Deep Unsupervised Learning Spring 2020

Another great course taught by Pieter Abbeel; some of these ideas are quite useful in research. https://www.youtube.com/watch?v=V9Roouqfu-M&list=PLwRJQ4m4UJjPiJP3691u-qWwPGVKzSlNP&t=776s&ab_channel=PieterAbbeel

My main motivation for understanding this is F1TENTH.

## Lecture 1

## Lecture 2

## Lecture 3

### Likelihood-based models

Problems we’d like to solve:

  • Generating data: synthesizing images, videos, speech, text
  • Compressing data: constructing efficient codes
  • Anomaly detection

Likelihood-based models: estimate $p_{\text{data}}$ from samples $x^{(1)}, \dots, x^{(n)} \sim p_{\text{data}}$.

Learns a distribution $p$ that allows:

  • Computing $p(x)$ for arbitrary $x$
  • Sampling $x \sim p$ (a minimal sketch of both operations follows below)
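Here is a minimal sketch of those two operations, using a Normal fitted with scipy as a stand-in for a learned $p_\theta$ (my own illustration, not course code):

```python
# Sketch (not from the course): a fitted Normal standing in for a learned
# distribution p_theta, showing the two operations we want from it.
import numpy as np
from scipy.stats import norm

data = np.random.default_rng(0).normal(loc=2.0, scale=1.5, size=1000)

# "Learn" theta by MLE (for a Normal this is just the sample mean / std).
mu_hat, sigma_hat = data.mean(), data.std()

# 1) Compute p(x) for arbitrary x.
print(norm.pdf(0.0, loc=mu_hat, scale=sigma_hat))

# 2) Sample x ~ p.
print(norm.rvs(loc=mu_hat, scale=sigma_hat, size=5))
```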

There is a parallel with STAT206, where we learn MLE.

I’ve always wondered about this: when we learn MLE, all we are really doing is computing simple estimates, e.g. using the sample mean to estimate $\mu$ for a Normal distribution.
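For concreteness (my own addition, the standard textbook derivation): for i.i.d. data from $\mathcal{N}(\mu, \sigma^2)$ with $\sigma$ known, maximizing the log-likelihood in $\mu$ gives exactly the sample mean:

$$
\hat{\mu}_{\text{MLE}}
= \arg\max_{\mu} \sum_{i=1}^{n} \log \mathcal{N}\bigl(x^{(i)} \mid \mu, \sigma^2\bigr)
= \arg\min_{\mu} \sum_{i=1}^{n} \bigl(x^{(i)} - \mu\bigr)^2
= \frac{1}{n} \sum_{i=1}^{n} x^{(i)}.
$$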

But if we don’t know what distribution the data follows, can’t we just collect a bunch of samples and build a probability density function out of them, e.g. a histogram?

The problem is that this breaks down in higher dimensions (the number of bins grows exponentially, the curse of dimensionality), and even in one dimension it is very prone to overfitting.
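A quick illustration of the histogram idea and where it goes wrong (my own sketch in numpy, not course code):

```python
# Sketch (my own illustration): a histogram density estimate in 1D.
# With few samples and many bins the estimate is jagged -- it "memorizes"
# the sample rather than generalizing, i.e. it overfits.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=100)            # 100 samples from a standard Normal

counts, edges = np.histogram(data, bins=50, density=True)

# Many bins get 0 or 1 samples, so the estimated density assigns
# probability ~0 to perfectly plausible x values in between.
print((counts == 0).sum(), "empty bins out of", len(counts))

# In d dimensions a histogram needs (#bins)^d cells, so almost every cell
# is empty for any realistic d: the curse of dimensionality.
```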

Instead, we do function approximation: rather than storing each probability in a table, we store a parameterized function $p_\theta(x)$.
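A minimal sketch of what "parameterized function" means in practice, assuming PyTorch (the specific setup, a softmax over 10 discrete outcomes, is my own toy example, not the course's):

```python
# Sketch: instead of a lookup table of probabilities, keep logits theta and
# define p_theta(x) = softmax(theta)[x], then fit theta by maximum likelihood.
import torch

k = 10                                  # discrete support {0, ..., 9}
data = torch.randint(0, 3, (1000,))     # toy data concentrated on {0, 1, 2}

theta = torch.zeros(k, requires_grad=True)   # one logit per outcome
opt = torch.optim.SGD([theta], lr=0.1)

for step in range(200):
    log_p = torch.log_softmax(theta, dim=0)   # log p_theta over the support
    nll = -log_p[data].mean()                 # negative log-likelihood
    opt.zero_grad()
    nll.backward()
    opt.step()

print(torch.softmax(theta, dim=0))      # should put most mass on {0, 1, 2}
```

This is still MLE, just with gradient descent on $\theta$ instead of a closed-form estimator, which is what lets it scale to the models below.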

This is where we talk about autoregressive models.

Novel NN architectures developed here carry over to other disciplines.