
linuxftw · today at 3:00 AM

Based on conversations I've had with some people managing GPUs at scale in datacenters, inference is an afterthought. There is a gold rush for training right now, and that's where these massive clusters are being used.

LLMs are probably a small fraction of the overall GPU compute in use right now. I suspect that within the next 5 years we'll have full Hollywood movies (or at least their special effects) generated entirely by AI.