Hacker News

msp26 | today at 6:11 PM | 1 reply

Horrific comparison point. LLM inference is way more expensive per token locally for a single user than batched inference at scale in a datacenter on actual GPUs/TPUs: a lone request can't amortize the weight reads and hardware cost the way a large batch can.
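
A rough back-of-envelope sketch of the gap (every number below is an illustrative assumption, not a measurement): decoding for one local user is memory-bandwidth bound and leaves the hardware mostly idle, while a batched server amortizes the same weight reads over many concurrent requests.

    # Illustrative back-of-envelope; every number is an assumption, not a benchmark.
    def cost_per_million_tokens(hourly_cost_usd: float, tokens_per_second: float) -> float:
        """Hardware cost attributed to one million generated tokens."""
        tokens_per_hour = tokens_per_second * 3600
        return hourly_cost_usd / tokens_per_hour * 1_000_000

    # Assumed local setup: one consumer GPU, one request at a time,
    # decode throughput limited by memory bandwidth.
    local = cost_per_million_tokens(hourly_cost_usd=0.30, tokens_per_second=30)

    # Assumed datacenter setup: one server GPU running a large batch,
    # weight reads shared across many concurrent requests.
    datacenter = cost_per_million_tokens(hourly_cost_usd=3.00, tokens_per_second=3000)

    print(f"local:      ${local:.2f} per 1M tokens")       # ~$2.78 under these assumptions
    print(f"datacenter: ${datacenter:.2f} per 1M tokens")   # ~$0.28 under these assumptions

Under these made-up numbers the batched setup comes out roughly an order of magnitude cheaper per token; the real ratio depends entirely on the hardware, model, and batch sizes involved.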


Replies

AlexandrB | today at 6:15 PM

How is that horrific? It sets an upper bound on the cost, which turns out to be not very high.