Hacker News

Kuinox | last Thursday at 3:50 PM | 2 replies

I speculate that LLM providers are dynamically serving smaller models to handle usage spikes and to free up compute for training new models. I have observed that agent models get worse over time, especially just before a new model is released.


Replies

Workaccount2 | last Thursday at 10:51 PM

Internally, everyone is compute constrained. No one will convince me that the models getting dumber, and especially their getting lazy, isn't because the servers are currently being inundated.

However, right now it looks like we will move to separate training-specific and inference-specific hardware, which hopefully relieves some of that tension.

Cthulhu_ | last Thursday at 3:52 PM

Probably a big factor; the biggest challenge AI companies face now is balancing value, cost, and revenue. There will be a big correction, with many smaller players collapsing or being absorbed as investor money dries up.
