
May 12, 2025, 1 min read

LoRA: Low-Rank Adaptation of Large Language Models

https://arxiv.org/abs/2106.09685

LoRA is a technique for fine-tuning LLMs that is far more parameter- and memory-efficient than full fine-tuning.

Low-Rank Decomposition: instead of updating all parameters of a pre-trained model, LoRA freezes the original weights and injects a small set of trainable low-rank matrices into specific layers (typically the attention projections). For a pre-trained weight $W_0 \in \mathbb{R}^{d \times k}$, the update is constrained to $W_0 + \Delta W = W_0 + BA$ with $B \in \mathbb{R}^{d \times r}$, $A \in \mathbb{R}^{r \times k}$, and rank $r \ll \min(d, k)$, so only $A$ and $B$ are trained.
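A minimal sketch of a LoRA-adapted linear layer in PyTorch, assuming the standard formulation from the paper. The names `LoRALinear`, `rank`, and `alpha` are illustrative, not taken from any particular library.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRALinear(nn.Module):
    """Sketch of a linear layer with a frozen base weight W0 and a trainable low-rank update BA."""

    def __init__(self, in_features: int, out_features: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        # Frozen pre-trained weight W0 (in practice, loaded from the base model, not trained)
        self.weight = nn.Parameter(torch.empty(out_features, in_features), requires_grad=False)
        # Trainable low-rank factors: delta_W = B @ A, with rank r << min(d, k)
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))  # zero-init so delta_W starts at 0
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # h = W0 x + scaling * (B A) x
        base = F.linear(x, self.weight)
        lora = F.linear(F.linear(x, self.lora_A), self.lora_B)
        return base + self.scaling * lora
```

The practical payoff: only $A$ and $B$ receive gradients, so for a 4096×4096 projection with $r = 8$ the trainable parameter count per layer drops from ~16.8M to ~65K, and the optimizer state shrinks accordingly.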

https://www.reddit.com/r/MachineLearning/comments/13m78u6/d_an_eli5_explanation_for_lora_lowrank_adaptation/

