Trivial in the grand scheme of things. There are much larger problems to attend to - if worrying about the cost and impact of AI tokens were our biggest problem, we'd be living in a utopia.
Literally pick any of the top 100 most important problems you could have any impact on, and none of them will be related to AI cost or impact. Some might be AI-related in the "what do we do when the jobs are gone" sense. But this is trivial - you could run the site itself on a Raspberry Pi.
I'm under the impression LLMs don't generally run that well on a Raspberry Pi, and I'm guessing that's what the GP is referring to.
I think this is a strange, and honestly worrying, stance.
Just because there are worse problems doesn't mean we shouldn't care about lesser ones (that reasoning is a logical fallacy; I think it's called relative privation).
Further, there is an extremely limited number of problems that I, personally, can have any impact on. That doesn't mean the problems I can't affect aren't problems, or that I can't worry about them.
My country is being filled up with data centers. Since the rise of LLMs, the pace at which they are being built has increased tremendously. Everywhere I go, there are these huge, ugly, energy- and water-devouring behemoths of buildings. If we were using technology only (or primarily) for useful things, we would need maybe a tenth of the data centers, and my immediate living environment would benefit from it.
Finally, the site itself could perhaps be run on a Raspberry Pi. But the site isn't the interesting part; the LLMs using it are.