Encoder / Decoder (Deep Learning)
In deep learning, an encoder maps high-dimensional input data into a lower-dimensional, compact representation.
Encoder: compress the input into a latent representation
Decoder: expand the latent representation back into meaningful output
You’ll see this terminology in practice, e.g. the “Image Encoder” in SAM (the Segment Anything Model), which maps an input image into embeddings for the rest of the model.
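A minimal PyTorch sketch of this compress/expand pattern (layer sizes and class names here are illustrative, not tied to any particular model):

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a high-dimensional input (e.g. a flattened 28x28 image)
    down to a compact latent vector."""
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Expands a latent vector back to the original input dimensionality."""
    def __init__(self, latent_dim=32, output_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128),
            nn.ReLU(),
            nn.Linear(128, output_dim),
        )

    def forward(self, z):
        return self.net(z)

x = torch.randn(16, 784)   # batch of 16 flattened images
z = Encoder()(x)           # -> (16, 32): compact latent representation
x_hat = Decoder()(z)       # -> (16, 784): expanded back to input size
```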
Examples by Context
- Autoencoder (training sketch after this list):
  - Encoder → compresses image → latent vector
  - Decoder → reconstructs image from latent vector
- Sequence-to-sequence models, like translation (sketch after this list):
  - Encoder → reads source sentence (e.g. English)
  - Decoder → generates target sentence (e.g. French)
- VAE (Variational Autoencoder; sampling sketch after this list):
  - Encoder → outputs the parameters (mean and variance) of a latent distribution
  - Decoder → takes a sampled latent code and outputs a reconstructed image or signal
- Transformers, e.g. GPT (generation sketch after this list):
  - Decoder-only: generates text autoregressively from a sequence of tokens, predicting one token at a time
  - Encoder-decoder (e.g. T5, BART): the decoder uses the encoder output plus its own past outputs to generate sequences
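For the autoencoder case, a hypothetical end-to-end training sketch: encoder and decoder are trained jointly so that the reconstruction matches the input (the dimensions and the random batch are stand-ins for real data):

```python
import torch
import torch.nn as nn

# encoder compresses, decoder reconstructs; the two are trained jointly
encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 32))
decoder = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 784))

params = list(encoder.parameters()) + list(decoder.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(16, 784)          # stand-in for a real data batch
for _ in range(100):
    z = encoder(x)                # compress -> latent vectors (16, 32)
    x_hat = decoder(z)            # reconstruct from latents (16, 784)
    loss = loss_fn(x_hat, x)      # reconstruction error: output vs. input
    opt.zero_grad()
    loss.backward()
    opt.step()
```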
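For sequence-to-sequence, a toy GRU-based encoder-decoder: the encoder summarizes the source sentence into a hidden state, and the decoder generates the target conditioned on it. The vocab sizes, shapes, and teacher-forcing setup here are all assumed for illustration:

```python
import torch
import torch.nn as nn

SRC_VOCAB, TGT_VOCAB, HID = 1000, 1200, 256  # toy sizes

src_emb = nn.Embedding(SRC_VOCAB, HID)
tgt_emb = nn.Embedding(TGT_VOCAB, HID)
encoder = nn.GRU(HID, HID, batch_first=True)
decoder = nn.GRU(HID, HID, batch_first=True)
to_vocab = nn.Linear(HID, TGT_VOCAB)

src = torch.randint(0, SRC_VOCAB, (8, 12))   # batch of source sentences
tgt = torch.randint(0, TGT_VOCAB, (8, 15))   # targets (teacher forcing)

# Encoder reads the whole source sentence into a summary hidden state.
_, state = encoder(src_emb(src))
# Decoder generates conditioned on that state (teacher-forced here).
out, _ = decoder(tgt_emb(tgt), state)
logits = to_vocab(out)                       # (8, 15, TGT_VOCAB)
```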
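For the VAE, the key difference is that the encoder outputs the parameters of a distribution, and the decoder consumes a latent code sampled from it. A minimal sketch of that sampling step (the reparameterization trick), with illustrative dimensions:

```python
import torch
import torch.nn as nn

LATENT = 32
enc = nn.Linear(784, 2 * LATENT)   # outputs mean and log-variance
dec = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(), nn.Linear(128, 784))

x = torch.randn(16, 784)                     # stand-in input batch
mu, logvar = enc(x).chunk(2, dim=-1)         # split into (16, 32) each

# Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
# so gradients can flow back through the sampling step.
z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

x_hat = dec(z)   # decoder maps the sampled code -> reconstruction
```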
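And for a decoder-only Transformer, generation is autoregressive: each new token is predicted from the tokens produced so far. A sketch using the Hugging Face transformers library (assumes the library is installed; downloads GPT-2 weights on first run):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The encoder maps the input to", return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=20)  # autoregressive decoding
print(tok.decode(out[0]))
```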