Hacker News

SpicyLemonZest · yesterday at 3:05 PM

The actual contents of this article make reasonable arguments I largely agree with. It would be very surprising for LLM-based AI systems to act as monomaniacal goal optimizers, since they're trained on human text and humans are extremely bad at goal-oriented behavior. (My goals for today include a number of work and self-maintenance tasks, and the time I'm spending here writing out an HN comment does nothing to help achieve them. I suspect most people reading this comment are in the same boat.)

It's very frustrating that the magazine wrote such a dumb headline, which guarantees people won't talk about the issues the article raised. Obviously non-goal-oriented systems can still have important negative effects.


Replies

mitthrowaway2 · today at 4:37 AM

I feel like if it took thirty or forty more years of research before a more advanced, more goal-oriented AI became capable of destroying humanity for paperclips, that would still be a bad outcome. The road ahead is long and we're moving too fast to only consider what's in the rear view mirror.