
ben_w · yesterday at 2:53 PM

"Worst" outcome assumes it's easy to give an ordering.

Which is worse, (1) accidentally blowing yourself up with home-made nitroglycerin/poisoning yourself because your home-made fume hood was grossly insufficient, or (2) accidentally making a novel long-lived compound which will give 20 people slow-growing cancers that will on average lower their life expectancy by 2 years each?

What if it's a mercury compound (or methyl alcohol) at a dose that causes a small degree of mental impairment in a large number of people?

If you're actually trying to cause harm, then your "worst" case is diametrically opposed to everyone else's, because for you the "worst" case is that it does nothing at great expense.

Right now, I expect LLM failures to be more of the "does nothing or kills the user" kind; given what I see from NileRed, chemistry can be hard to get right even when you know what you're doing.


Replies

himata4113 · yesterday at 3:51 PM

As someone who also watches NileRed: of course it's hard, but AI can give you solutions you normally wouldn't be able to come up with due to lack of knowledge and/or education.

And to clarify, by "worst case" I meant that if you're already trying to create a deadly compound, the worst that can happen is that you kill yourself, which is a risk the user has already accepted.