
Tostino 11/09/2024

Thanks for the TLDR. Yeah, that pretty much fits my experience, though I mainly cared about performance on the specific task I was training for, rather than about regressions on unrelated tasks.


Replies

danielhanchen 11/09/2024

:) Oh yes, from the paper it looks like if one uses alpha = 2*rank, LoRA sometimes does even better than full finetuning.
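
For reference, here is a minimal sketch (not from the thread) of what the alpha = 2*rank heuristic looks like with Hugging Face PEFT; the base model name, rank, and target modules below are just illustrative choices, not anything the commenters specified.

```python
# Minimal sketch: set lora_alpha to twice the rank when building a LoRA adapter.
# Model name, rank, and target_modules are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

rank = 16
config = LoraConfig(
    r=rank,
    lora_alpha=2 * rank,  # the alpha = 2*rank heuristic mentioned above
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.0,
    task_type="CAUSAL_LM",
)

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the LoRA weights are trainable
```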