Hacker News

davidsainez, yesterday at 9:42 PM

There are well-documented cases of performance degradation: https://www.anthropic.com/engineering/a-postmortem-of-three-....

The real issue is that there is currently no reliable way for the end user to detect changes in performance, other than being willing to burn the cash and run your own benchmarks regularly.
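
For what it's worth, "run your own benchmarks" doesn't have to be expensive if the eval set is small and mechanically checkable: a cron-style canary that replays a fixed set of prompts and compares the pass rate against earlier runs will at least flag large regressions. Rough sketch below; call_model() is a hypothetical stand-in for whichever provider SDK you actually use, and the eval cases, threshold, and history file are purely illustrative.

    # Recurring "canary" benchmark sketch for spotting model drift.
    # call_model() is a hypothetical placeholder, not any real SDK call;
    # the eval cases and alert threshold are illustrative, not tuned values.
    import json
    import statistics
    from datetime import date

    # Fixed eval set: prompts whose answers can be checked mechanically.
    EVAL_CASES = [
        {"prompt": "What is 17 * 23? Reply with the number only.",
         "expected": "391"},
        {"prompt": "Reverse the string 'benchmark'. Reply with the string only.",
         "expected": "kramhcneb"},
        # ...in practice you want dozens of cases to reduce noise
    ]

    ALERT_DROP = 0.10           # flag drops of more than 10 points vs. baseline
    HISTORY_FILE = "benchmark_history.jsonl"

    def call_model(prompt: str) -> str:
        """Hypothetical provider call; wire this up to your actual API client."""
        raise NotImplementedError

    def run_eval() -> float:
        # Fraction of eval cases whose reply exactly matches the expected answer.
        passes = sum(
            1 for case in EVAL_CASES
            if call_model(case["prompt"]).strip() == case["expected"]
        )
        return passes / len(EVAL_CASES)

    def main() -> None:
        pass_rate = run_eval()

        # Crude baseline: median pass rate of all previous runs.
        try:
            with open(HISTORY_FILE) as f:
                history = [json.loads(line)["pass_rate"] for line in f]
        except FileNotFoundError:
            history = []

        if history:
            baseline = statistics.median(history)
            if baseline - pass_rate > ALERT_DROP:
                print(f"ALERT: pass rate {pass_rate:.2f} vs. baseline {baseline:.2f}")

        # Append today's result so future runs can compare against it.
        with open(HISTORY_FILE, "a") as f:
            f.write(json.dumps({"date": date.today().isoformat(),
                                "pass_rate": pass_rate}) + "\n")

    if __name__ == "__main__":
        main()

A handful of exact-match cases like this is obviously noisy, but the point is only that a cheap, automated canary catches the big, sudden drops, which is exactly the kind of change the end user currently has no other way to see.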

It feels to me like a perfect storm. The combination of high inference costs, extreme competition, and the statistical nature of LLMs makes it very tempting for a provider to tune their infrastructure to squeeze more volume out of their hardware. I don't mean to imply bad faith: things are moving at breakneck speed and people are trying anything that sticks. But the problem persists: people are building on systems that are in constant flux, for better or for worse.


Replies

Wowfunhappy, yesterday at 9:51 PM

> There are well-documented cases of performance degradation: https://www.anthropic.com/engineering/a-postmortem-of-three-...

There was one well-documented case of performance degradation, and it arose from a stupid bug, not some secret cost-cutting measure.
