# CS294-158-SP20 Deep Unsupervised Learning Spring 2020
Another great course taught by Pieter Abbeel; some of these ideas are quite useful in research. https://www.youtube.com/watch?v=V9Roouqfu-M&list=PLwRJQ4m4UJjPiJP3691u-qWwPGVKzSlNP&t=776s&ab_channel=PieterAbbeel
My main motivation for understanding this is F1TENTH.
Problems we’d like to solve:
- Generating data: synthesizing images, videos, speech, text
- Compressing data: constructing efficient codes
- Anomaly detection
Likelihood-based models: estimate $p_{\text{data}}$ from samples $x^{(1)}, \dots, x^{(n)} \sim p_{\text{data}}(x)$.
Learns a distribution $p$ that allows:
- Computing $p(x)$ for arbitrary $x$
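A minimal sketch of these two operations (my own toy example, not from the lecture), using a fixed Gaussian as the "model" just to show what evaluating $p(x)$ and sampling look like in code:

```python
import torch
from torch.distributions import Normal

p = Normal(loc=0.0, scale=1.0)   # stand-in for a learned distribution

x = torch.tensor(0.5)
print(p.log_prob(x).exp())       # computing p(x) for an arbitrary x
print(p.sample((5,)))            # sampling x ~ p(x)
```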
But if we don’t know what distribution the data follows, can’t we just sample a bunch of data and build a probability density function (essentially a histogram) out of it?
The problem is that in higher dimensions this doesn’t work (the number of bins grows exponentially), and even in one dimension it is very prone to overfitting. A rough sketch of this below.
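A toy illustration (assumptions mine) of the histogram idea in 1D: count how often each discrete value appears and normalize. With few samples per bin, many probabilities come out as exactly 0 or very noisy, which is the overfitting problem above, and in $d$ dimensions the bin count grows exponentially.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.integers(0, 100, size=200)   # 200 samples over 100 possible values

counts = np.bincount(data, minlength=100)
p_hat = counts / counts.sum()           # empirical PDF: p_hat[x] = count(x) / N

print(p_hat[:10])                       # several entries are exactly 0
```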
Instead, we use function approximation: rather than storing each probability, we store a parameterized function $p_\theta(x)$ and fit $\theta$ by maximum likelihood.
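A minimal sketch of fitting $p_\theta$ by maximum likelihood (my own illustration; here $\theta$ is still one logit per value for simplicity, whereas in practice $\theta$ would parameterize a neural network):

```python
import torch
import torch.nn.functional as F

data = torch.randint(0, 100, (200,))            # toy discrete samples
theta = torch.zeros(100, requires_grad=True)    # parameters of p_theta
opt = torch.optim.SGD([theta], lr=0.1)

for _ in range(500):
    log_p = F.log_softmax(theta, dim=0)         # log p_theta(x) for every x
    loss = -log_p[data].mean()                  # negative log-likelihood
    opt.zero_grad()
    loss.backward()
    opt.step()
```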
This is where we talk about autoregressive models.
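A small sketch of the autoregressive idea (my own illustration, not the lecture's architecture): factorize $p(x) = \prod_i p(x_i \mid x_{<i})$ with the chain rule, here using a tiny GRU that outputs a categorical distribution over each dimension given the previous ones.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

K, D = 10, 5                                    # 10 possible values, 5 dimensions

embed = nn.Embedding(K + 1, 16)                 # extra index K acts as a "start" token
rnn = nn.GRU(16, 32, batch_first=True)
head = nn.Linear(32, K)                         # logits for p(x_i | x_{<i})

def log_prob(x):                                # x: (batch, D) ints in [0, K)
    start = torch.full((x.shape[0], 1), K, dtype=torch.long)
    inp = torch.cat([start, x[:, :-1]], dim=1)  # shift right: condition on x_{<i}
    h, _ = rnn(embed(inp))
    logits = head(h)                            # (batch, D, K)
    return F.log_softmax(logits, dim=-1).gather(-1, x.unsqueeze(-1)).squeeze(-1).sum(-1)

x = torch.randint(0, K, (4, D))
print(log_prob(x))                              # log p(x) for each sample in the batch
```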
Novel NN architectures developed for these models carry over to other disciplines.