> "Frontier LLMs can do it with enough context" is not really a strong argument against fine-tuning, because they're expensive to run.
I am not an expert in this topic, but I am wondering whether a large cached context is actually cheap to serve, in which case frontier models could be cost-efficient in such a setting too.
I'd like to read more about that if anyone has any suggestions.
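To make the question concrete, here is a back-of-envelope sketch of what caching could change about per-request input cost. All the numbers are illustrative placeholders (not any vendor's real rates); the only structural assumption is that cached input tokens are billed at a steep discount relative to fresh input tokens, which is how several providers describe their prompt-caching pricing.

```python
# Back-of-envelope comparison of per-request cost with and without a
# prompt-cache hit. All prices are hypothetical, $ per million tokens.

def request_cost(context_tokens, output_tokens,
                 in_price, out_price, cached_in_price, cache_hit):
    """Dollar cost of one request under the given (illustrative) rates."""
    in_rate = cached_in_price if cache_hit else in_price
    return (context_tokens * in_rate + output_tokens * out_price) / 1e6

# Hypothetical rates: cached input billed at 10% of the normal input price.
IN, OUT, CACHED_IN = 3.00, 15.00, 0.30

cold = request_cost(100_000, 500, IN, OUT, CACHED_IN, cache_hit=False)
warm = request_cost(100_000, 500, IN, OUT, CACHED_IN, cache_hit=True)
print(f"cold: ${cold:.4f}, warm: ${warm:.4f}")
```

With a 100k-token context, the input cost dominates on a cold request, so even a large discount on cache hits only helps if most requests actually hit the cache; that hit rate seems like the crux of the fine-tuning-vs-big-context cost question.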