:) Oh yeah, from the paper it looks like if one uses alpha = 2*rank, LoRA sometimes does even better than full finetuning
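
Just as a rough sketch of what that setting looks like in practice (assuming the Hugging Face `peft` library here, and the module names are just placeholders depending on the model):

```python
from peft import LoraConfig

rank = 16
config = LoraConfig(
    r=rank,
    lora_alpha=2 * rank,  # the alpha = 2*rank heuristic mentioned above
    target_modules=["q_proj", "v_proj"],  # placeholder module names; pick per model
)
```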