Hacker News

andriy_koval · yesterday at 9:03 PM · 1 reply

> "Frontier LLMs can do it with enough context" is not really a strong argument against fine-tuning, because they're expensive to run.

I'm not an expert in this topic, but I'm wondering whether a large cached context is actually cheap to run, and whether frontier models would also be cost-efficient in such a setting.
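The back-of-envelope arithmetic is easy to sketch. Providers that offer prompt caching typically bill cache-hit input tokens at a steep discount, so a long shared context amortizes well across many requests. All the prices and the discount factor below are made-up placeholders for illustration, not any provider's real rates:

```python
# Rough cost comparison: one request with a large context,
# uncached vs. served from a prompt cache.
# Prices are hypothetical placeholders (per 1M tokens), NOT real rates.

def request_cost(context_tokens: int, output_tokens: int,
                 input_price: float, cached_price: float,
                 output_price: float, cache_hit: bool) -> float:
    """Dollar cost of a single request; prices are per 1M tokens."""
    per_token_in = cached_price if cache_hit else input_price
    return (context_tokens * per_token_in
            + output_tokens * output_price) / 1_000_000

# Hypothetical frontier-model pricing: cached input at a 90% discount.
INPUT, CACHED, OUTPUT = 3.00, 0.30, 15.00

cold = request_cost(100_000, 500, INPUT, CACHED, OUTPUT, cache_hit=False)
warm = request_cost(100_000, 500, INPUT, CACHED, OUTPUT, cache_hit=True)
print(f"cold: ${cold:.4f}  warm: ${warm:.4f}")
```

Under these assumed numbers the cached request is nearly an order of magnitude cheaper, since the 100k-token context dominates the bill; whether that beats a fine-tuned small model still depends on real pricing and request volume.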


Replies

prettyblocks · today at 1:55 AM

I'd like to read more about that if anyone has any suggestions.