Hacker News

camdenreslink, yesterday at 8:21 PM

It could be. Or just smarter caching (which wouldn't necessarily have anything to do with model intelligence). Or just overfitting on the 95% most common prompts (which could save tokens but make the models less intelligent and less flexible).
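The "smarter caching" idea could be sketched roughly like this: serve repeated or near-duplicate prompts from a cache instead of re-running the model. Everything here is illustrative — `call_model` is a hypothetical stand-in for a real inference call, and the whitespace/case normalization is just one naive way to make near-duplicates share a cache entry.

```python
from functools import lru_cache

def call_model(prompt: str) -> str:
    # Hypothetical stand-in for an expensive model inference call.
    return f"response to: {prompt}"

def normalize(prompt: str) -> str:
    # Collapse case and whitespace so near-duplicate prompts
    # map to the same cache key (a deliberately naive strategy).
    return " ".join(prompt.lower().split())

@lru_cache(maxsize=1024)
def cached_call(normalized_prompt: str) -> str:
    # Only a cache miss actually invokes the model.
    return call_model(normalized_prompt)

def answer(prompt: str) -> str:
    return cached_call(normalize(prompt))
```

With this sketch, `answer("Hello  World")` and `answer("hello world")` hit the same cache entry, saving the second inference call — which is exactly the kind of token saving that has nothing to do with the model itself getting smarter.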