
themafia · today at 6:21 PM

> They also build secondary LLMs which double-check that the core LLM is not telling people how to build pipe bombs

Such a fear-mongering position. You can already learn to build pipe bombs. Take any chemical reaction that produces gas and heat, and contain it. Congratulations, you have a pipe bomb.

Meanwhile... just... ask an LLM whether you can safely mix certain cleaning chemicals.

> I see four moats that could prevent this from happening.

Really? Because you just said:

> human brains, which are biologically predisposed to acquire prosocial behavior

You think you're going to constrain _human_ behavior by twiddling with language models? That is foolishly naive in the extreme.

If you put basic, well-understood human considerations before corporate ones, reality becomes far easier to predict.


Replies

bigfishrunning · today at 8:02 PM

> Meanwhile.. just.. ask an LLM if you can mix certain cleaning chemicals safely.

The cost of a wrong answer to this question is so incredibly high that I hope nobody is sincerely asking an LLM for this information. The things people entrust to a "machine that gives convincing answers that are correct 90% of the time" continue to shock me.
