
jstanley today at 1:33 PM

> Because they should 100% be liable for the latter.

Why? I don't see why a drug designed by ChatGPT should result in any more or less liability than one designed by a human.

I think if a human designs a drug, tests it, everything seems fine, the government approves it, and it later turns out to kill loads of people that nobody thought it would... that's just bad luck! You shouldn't face serious liability for that.


Replies

alpha_squared today at 1:47 PM

If we start from the marketing hype, and even Sam Altman's own statements, these tools will "solve all of physics". To me that's laughable, but it's also what's driven their outsized valuations. When the output is used to drive product decisions and development, it's not hard to imagine a scenario where a resulting product isn't fully vetted because of constant corporate pressure to "move faster" and the unrealistic "solve all of physics" hype. This is similar to Tesla selling "Full Self-Driving" that isn't full self-driving as most people would understand the term, which is why they lost in court over how they market their autonomous driving features.

dryarzeg today at 1:48 PM

> that's just bad luck

I can't agree with this at all. That's not "just bad luck"; it's a serious case of negligence and failed oversight, regardless of where exactly it occurred: with the drug's manufacturer, the government agency responsible for oversight, or somewhere else. It just doesn't work that way. Any drug undergoes very thorough and rigorous testing before widespread use (which is implied by "millions of deaths"). Maybe I'm just missing something, and this isn't my field, but I can't imagine how, with proper, responsible testing, such a dangerous "drug" could pass every stage of testing and inspection. With such a high mortality rate (to reinforce the point: millions of deaths cannot be "unseen edge cases"), it simply shouldn't be possible under a proper testing regime. Please correct me if I'm wrong.

> I don't see why a drug designed by ChatGPT should result in any more or less liability than one designed by a human.

It’s simple: in this case, ChatGPT acts as a tool in the drug manufacturing process, and in some cases that tool can be faulty by design.

Suppose that during production of a hypothetical drug at a factory, one of the production machines malfunctions because of a design flaw (i.e., the machine’s manufacturer is to blame for the failure; it’s not a matter of improper operation), and because of this malfunction the drugs are produced incorrectly and lead to deaths. Then at least part of the responsibility must fall on the machine’s manufacturer. Of course, responsibility also lies with those who used it for production, because they should have thoroughly tested something so critically important before releasing it, but responsibility also lies with the manufacturer who made such a serious design error.

The same goes for ChatGPT. It’s clear that the user also bears responsibility, but if this “machine” is by design capable of generating a recipe for a deadly poison disguised as a “medicine” - and the recipe is so convincing that it passes government inspections - then its creators must also bear responsibility.

EDIT: I'm not sure how relevant this is, but I've just remembered the Therac-25 incidents, where some patients received radiation overdoses due to software faults. Who was to blame: the users (operators) or the manufacturer (AECL)? I'm unsure how applicable it is to the hypothetical ChatGPT case, though, because you can't "program" guardrails into an LLM the same way you can in a deterministic program.
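To make that contrast concrete, here's a minimal sketch in Python (the dose limit, the names, and risk_model are all hypothetical, not from Therac-25 or any real system): a deterministic interlock is an ordinary conditional you can verify exhaustively, while an LLM "guardrail" is typically another statistical model whose verdict is only an estimate.

    # Deterministic interlock: a plain conditional. Same input, same
    # decision, every time, so its behaviour can be exhaustively verified.
    MAX_DOSE_CGY = 200.0  # hypothetical hard limit, not a real Therac-25 value

    def interlock_allows(requested_dose_cgy: float) -> bool:
        """Refuse any dose outside the hard-coded safe range."""
        return 0.0 < requested_dose_cgy <= MAX_DOSE_CGY

    # Probabilistic "guardrail": a learned classifier scoring how risky a
    # request looks. Its output is an estimate, so no finite test suite can
    # guarantee it never waves a harmful request through.
    def guardrail_allows(prompt: str, risk_model) -> bool:
        """risk_model stands in for some learned classifier (hypothetical)."""
        p_harmful = risk_model.score(prompt)  # probability estimate, not proof
        return p_harmful < 0.01  # threshold trades safety against usefulness

The first function's safety property is a fact about the code itself; the second's is a statistical claim about a model, which is exactly why the Therac-25 comparison only goes so far.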
