Hacker News

Art9681 · last Tuesday at 2:31 AM · 0 replies

RoPE scaling is not an ideal solution, since LLMs in general start degrading at around 8k tokens anyway. You also have the cost problem of yolo'ing a long context into every task turn, even if the LLM were capable of crunching 1M tokens. And if you self-host, you trade that for prompt processing time. So in the end it doesn't matter whether the context problem gets solved and we can invoke any number of tools per task turn: it will be a quick way to become poor as long as providers are charging per token. The only viable solution is a smart router, so that only the relevant tools and their descriptions are appended to the context on each task turn.
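To make that concrete, here's a minimal sketch of what such a router could look like. It uses naive keyword overlap as a stand-in for a real embedding-based retriever, and the tool list and route() helper are made up for illustration, not any particular framework's API:

    from dataclasses import dataclass, field

    @dataclass
    class Tool:
        name: str
        description: str
        schema: dict = field(default_factory=dict)  # JSON schema sent with the prompt

    TOOLS = [
        Tool("search_web", "search the web for pages matching a query"),
        Tool("read_file", "read the contents of a file from disk"),
        Tool("run_sql", "execute a sql query against the project database"),
        Tool("send_email", "send an email to a recipient with subject and body"),
    ]

    def score(query: str, tool: Tool) -> float:
        """Naive relevance score: fraction of query words found in the description.
        In practice you'd swap this for embedding cosine similarity."""
        q_words = set(query.lower().split())
        d_words = set(tool.description.lower().split())
        return len(q_words & d_words) / max(len(q_words), 1)

    def route(query: str, k: int = 2) -> list[Tool]:
        """Return only the k most relevant tools for this task turn."""
        ranked = sorted(TOOLS, key=lambda t: score(query, t), reverse=True)
        return ranked[:k]

    if __name__ == "__main__":
        task = "read the config file and run a sql query against the database"
        for tool in route(task):
            print(tool.name)  # only these schemas get appended to the context

The point is that token cost per turn scales with the number of tool schemas you attach, so ranking and truncating before the model ever sees the prompt is where the savings come from, whatever scoring function you use.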