Hacker News

kilpikaarna · today at 8:24 AM

> mjk didn't have anything to add to the social economic conversation around llm usage

There's some semi-apologetic interest in ML, esp. smaller local models, in the "permacomputing" (don't like the term but whatev) sphere. But I don't know if there's much of a conversation around LLMs. With all the hype and how resource intensive and externalities-heavy they are, I can see wanting to draw a line, but it's sad to see it become a purity test.

Lately the discussion around this has had me thinking of the William Köttke quote: "not only is it ethical to use the resources of the current system to construct the next one; ideally, all the resources of the current system would be used to that end".

I think that if the situation is as dire as it's made out to be (I think it is), and if projects like uxn are a serious attempt at a mitigating response (less convinced, as cool as they are), then there's room for a conversation framed around beneficial vs. detrimental rather than good vs. evil. Then we could discuss whether it's a good idea to use LLM-based tools, while they're available, to help build out infrastructure that runs without them; whether there's nuance in what level of automation we draw the line at (Ivan Illich, tools vs. machines, etc.); human augmentation vs. replacement; the cognitive load stuff Keeter's post touches on; and so on.

Unfortunately, part of the polycrisis seems to be a difficulty in discussing things clearly.

> a bit tone deaf

Agree


Replies

selfhoster11 · today at 12:19 PM

I refuse to engage in "LLMs are evil, period" views. That's like walking out into a battlefield with a samurai sword, while your enemy has Gatling guns. You'll be shredded. The pressure to survive means new tools have to be examined and incorporated as and when needed. The resources needed to run a 24B LLM on a gaming GPU are not costing the earth.
