Representation Learning
Representation learning aims to extract useful feature representations of data, often in an unsupervised or self-supervised way, so that they can be reused for downstream tasks (e.g., classification, clustering).
Examples:
- Autoencoders
- Variational Autoencoders (VAE)
- Contrastive Learning
- Masked modeling (e.g., BERT, MAE)
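To make the contrastive-learning entry concrete, here is a minimal NumPy sketch of an InfoNCE-style loss: embeddings of two views of the same sample are pulled together, all other pairs act as negatives. The function name, temperature value, and shapes are illustrative assumptions, not a reference implementation.

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """Contrastive (InfoNCE-style) loss sketch.

    z1[i] and z2[i] are assumed to be embeddings of two views of sample i;
    matching pairs sit on the diagonal of the similarity matrix.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)   # unit-normalize
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                              # cosine similarities / temperature
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                   # cross-entropy, positives on diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
# Aligned views yield a lower loss than mismatched ones.
print(info_nce(z, z), info_nce(z, z[::-1]))
```

In practice the embeddings would come from an encoder applied to two augmentations of the same input; the encoder trained this way is then reused for downstream tasks.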
Relationship with generative models?
Representation learning and generative models often overlap, but they are not the same thing.
- Many generative models naturally learn useful representations as a by-product: a VAE, for instance, is trained to reconstruct and generate data, yet its encoder's latent codes are often reused as features.
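The by-product point above can be sketched in the simplest setting: a linear autoencoder trained only to reconstruct its input still ends up with an encoder whose outputs serve as compact features. The data, shapes, learning rate, and step count below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 points in 10-D that lie near a 2-D subspace.
basis = rng.normal(size=(2, 10))
X = rng.normal(size=(200, 2)) @ basis + 0.01 * rng.normal(size=(200, 10))

d, k = 10, 2                              # input dim, latent dim (assumed)
W_enc = 0.1 * rng.normal(size=(d, k))     # encoder weights
W_dec = 0.1 * rng.normal(size=(k, d))     # decoder weights
lr = 0.2

losses = []
for _ in range(1000):
    Z = X @ W_enc                         # latent codes: the learned representation
    X_hat = Z @ W_dec                     # reconstruction (the "generative" direction)
    err = X_hat - X
    losses.append(np.mean(err ** 2))
    # Exact gradients of the mean-squared reconstruction loss.
    grad_dec = (2.0 / X.size) * (Z.T @ err)
    grad_enc = (2.0 / X.size) * (X.T @ (err @ W_dec.T))
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

# Z now holds 2-D features of X, reusable for classification or clustering.
```

The training objective never mentions a downstream task, yet the encoder is forced to keep the information needed to reconstruct the input, which is exactly why its output is a useful representation.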