Hacker News

Glyptodon, today at 5:57 PM

I think there's another route this could go. At $7k a year or more per engineer in token use, it's very reasonable to buy engineers machines with obscene GPUs and RAM and run models locally. And if it doesn't make sense now, someone will figure it out and save companies $10k+/eng over 3 years.
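The back-of-the-envelope math here can be sketched like this; the $7k/yr token spend and 3-year horizon are from the comment, while the $11k workstation price is purely an illustrative assumption:

```python
# Rough cost comparison: cloud token spend vs. a one-time local GPU workstation.
# token_spend_per_year and the 3-year horizon come from the comment above;
# workstation_cost is a hypothetical figure chosen for illustration.
token_spend_per_year = 7_000
years = 3
workstation_cost = 11_000  # assumed one-time hardware cost per engineer

cloud_total = token_spend_per_year * years   # total cloud spend over the horizon
savings = cloud_total - workstation_cost     # per-engineer savings from going local
print(f"cloud: ${cloud_total:,}, local hw: ${workstation_cost:,}, savings: ${savings:,}")
```

Under those assumptions the savings land right at the $10k/eng the comment mentions; the real comparison would also have to price in electricity, depreciation, and whether local models are good enough for the workload.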


Replies

charcircuit, today at 6:39 PM

That could leave idle time where GPUs sit unused. It would be better to have a shared cluster that many engineers use. And to avoid the cluster being under-saturated, other companies' queries could be batched in too. And oh wait, we're back to doing AI inference in the cloud, because it's an efficient way to serve AI.
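The utilization argument can be sketched with toy numbers; the 5% per-engineer duty cycle and the 2x peak headroom are assumptions for illustration, not figures from the comment:

```python
import math

# Toy model: bursty per-engineer inference vs. a shared, batched cluster.
# All parameters are illustrative assumptions.
engineers = 100
duty_cycle = 0.05      # assumed fraction of the day each engineer keeps a GPU busy
peak_headroom = 2.0    # assumed extra capacity so queues stay short at peak times

local_gpus = engineers  # one mostly-idle GPU per engineer
shared_gpus = math.ceil(engineers * duty_cycle * peak_headroom)  # pooled demand
print(f"local: {local_gpus} GPUs, shared cluster: {shared_gpus} GPUs")
```

With these numbers the pooled cluster needs an order of magnitude fewer GPUs than one-per-desk, which is the statistical-multiplexing effect the reply is pointing at.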