
yanosh_kunsh · today at 3:21 PM

So does that mean LLM inference could get significantly cheaper, and/or that context lengths could increase dramatically?