I think you're right that that's the easiest approach. I hope you don't mind me asking about something a bit more involved, though.
Wouldn't fine-tuning produce better results, so long as the model doesn't catastrophically forget? You'd preserve more context window space too, right? Especially if you wanted it to memorize years' worth of facts?
Are LoRAs a thing with LLMs?
Could you train only certain layers of the model?
A good place to start your journey is this fine-tuning guide from Unsloth:
https://docs.unsloth.ai/get-started/fine-tuning-llms-guide
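
To your other questions: yes, LoRA is very much a thing with LLMs. It's the standard way to fine-tune on modest hardware, because only small low-rank adapter matrices are trained while the base weights stay frozen. Here's a rough sketch using Hugging Face's peft library; the model name and hyperparameters are illustrative assumptions, not recommendations:

```python
# Minimal LoRA fine-tuning setup with Hugging Face peft.
# Model name and hyperparameters are placeholders for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-3.2-1B"  # hypothetical choice; any causal LM works

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA injects small trainable low-rank matrices into selected
# projection layers while the base weights stay frozen.
lora_config = LoraConfig(
    r=16,                                  # rank of the low-rank update
    lora_alpha=32,                         # scaling factor for the update
    target_modules=["q_proj", "v_proj"],   # which projections get adapters
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total params
```

From here you'd train the wrapped model with any standard training loop or the transformers Trainer; the Unsloth guide above walks through the full workflow.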
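
And yes, you can train only certain layers. A common approach is to freeze the whole network, then unfreeze just the last few transformer blocks. A minimal PyTorch sketch, assuming a Llama-style model from transformers (attribute paths like `model.model.layers` differ by architecture):

```python
# Partial-layer fine-tuning: freeze everything, then unfreeze
# only the last few decoder blocks plus the LM head.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")

# Freeze the entire network.
for param in model.parameters():
    param.requires_grad = False

# Unfreeze the final 4 decoder blocks.
for block in model.model.layers[-4:]:
    for param in block.parameters():
        param.requires_grad = True

# Unfreeze the output head as well.
for param in model.lm_head.parameters():
    param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters: {trainable:,}")
```

This cuts memory and compute versus full fine-tuning, though LoRA usually gets you a better trade-off for the same budget.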