Hacker News

kelseyfrog · 11/08/2024

> When you finetune with lora, you're updating maybe 5% of the parameters

I'm not sure I understand this comment. The LoRA paper[1] specifically says that all of the pretrained weights remain frozen.

> keeping the pre-trained weights frozen

Specifically, the LoRA paper distinguishes its approach from methods that update only a subset of the parameters:

> Many sought to mitigate this by adapting only some parameters or learning external modules for new tasks.

1. https://arxiv.org/pdf/2106.09685


Replies

viktour19 · 11/08/2024

The effective parameters of the model are the parameters of the original model plus the LoRA parameters. That is, LoRA training updates only the LoRA parameters, while full fine-tuning updates only the original model's parameters.
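For concreteness, here is a minimal sketch of that arrangement (assuming PyTorch; the rank, alpha, and layer size are illustrative, not taken from the thread): the pretrained weight is frozen, only the low-rank factors A and B are trainable, and the effective weight is W plus the low-rank update BA.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA wrapper: the pretrained weight W stays frozen;
    only the low-rank factors A and B receive gradient updates."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)      # freeze pretrained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.scaling = alpha / rank
        # Low-rank update: delta_W = B @ A, with B initialized to zero
        # so training starts from the pretrained behavior.
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x):
        # Effective weight is W + scaling * (B @ A); applied additively here.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


# Rough parameter count for a single 4096x4096 projection at rank 8:
layer = LoRALinear(nn.Linear(4096, 4096, bias=False), rank=8)
total = sum(p.numel() for p in layer.parameters())
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable fraction: {trainable / total:.4%}")  # ~0.39% for this layer
```

For a single 4096x4096 projection at rank 8, the trainable fraction works out to roughly 0.4% (65,536 LoRA parameters against ~16.8M frozen ones), which is why only a small slice of the effective parameter count is ever updated.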