> When you finetune with lora, you're updating maybe 5% of the parameters
I'm not sure I understand this comment. The LoRA paper[1] specifically says that all of the pretrained weights remain frozen.
> keeping the pre-trained weights frozen
Specifically, the LoRA paper differentiates itself from approaches that update only a subset of the parameters, stating
> Many sought to mitigate this by adapting only some parameters or learning external modules for new tasks.
The effective parameters of the model are the original model's parameters plus the LoRA parameters, i.e. LoRA training updates only the LoRA parameters, while full finetuning updates the original model's parameters.
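Concretely, here's a minimal PyTorch sketch of the idea (my own illustration, not the paper's reference code): the pretrained weight is frozen, and only the low-rank A/B matrices receive gradients.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Sketch of a LoRA-adapted linear layer: h = W0 x + (alpha/r) * B A x,
    with W0 frozen and only A, B trainable."""
    def __init__(self, pretrained: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = pretrained
        self.base.weight.requires_grad_(False)   # pretrained weights stay frozen
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        d_in, d_out = pretrained.in_features, pretrained.out_features
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)  # trainable
        self.B = nn.Parameter(torch.zeros(d_out, r))         # trainable, zero-init
        self.scale = alpha / r

    def forward(self, x):
        # frozen pretrained path + low-rank update
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(nn.Linear(4096, 4096), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total}")  # only A and B count as trainable
```

So the "5%" (or whatever fraction) refers to the extra LoRA parameters relative to the frozen base, not to 5% of the original weights being updated.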