Let's not gloss over the electrical supply. These chips won't work for free.
LLM inference uses on the order of 1 Wh per query. That's the energy it takes to drive an EV less than 10 meters, or to run an air conditioner for under 5 seconds.
https://hannahritchie.substack.com/p/ai-footprint-august-202...
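For a rough sanity check, here's the arithmetic behind those comparisons, assuming an EV efficiency of about 160 Wh/km and an air conditioner drawing about 1.5 kW (both illustrative figures of my own, not from the linked post):

```python
# Back-of-envelope check of the 1 Wh-per-query comparison.
# Assumed figures: EV consuming ~160 Wh/km, air conditioner drawing ~1.5 kW.

QUERY_WH = 1.0          # energy per LLM query, in watt-hours
EV_WH_PER_KM = 160.0    # assumed EV consumption
AC_WATTS = 1500.0       # assumed air-conditioner power draw

ev_meters = QUERY_WH / EV_WH_PER_KM * 1000   # ~6 m of driving
ac_seconds = QUERY_WH / AC_WATTS * 3600      # ~2.4 s of cooling

print(f"1 Wh ~ {ev_meters:.1f} m of EV driving")
print(f"1 Wh ~ {ac_seconds:.1f} s of air conditioning")
```

Both numbers land comfortably inside the "under 10 meters" and "under 5 seconds" bounds quoted above.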