Is this for military scenarios, or for something like ChatGPT designing a drug that seemed to work but killed people by the millions five years later? Because they should 100% be liable for the latter. As for the former, good luck trying to prosecute an AI company for something the military does. To an extent, the military would probably want its AI models behind its own private network, completely firewalled from any public network. SIPRNet, iirc. If they lock it down behind a highly classified network, good luck figuring out how they're using AI.
> Because they should 100% be liable for the latter.
I completely agree with you here. I only want to add that in this case, the users (whoever actually used ChatGPT to design the drug, whichever entity or entities that is) should also be held liable for their actions.
> Because they should 100% be liable for the latter.
Why? I don't see why a drug designed by ChatGPT should carry any more or less liability than a drug designed by a human.
I think if a human designs a drug, tests it, everything seems fine, the government approves it, and then it later turns out to kill loads of people even though nobody thought it would... that's just bad luck! You shouldn't face serious liability for that.
Shouldn’t the pharmaceutical company be held liable for insufficiently understanding the drug before releasing it? I don’t think I understand blaming a tool used in the process of designing it and not those who chose to release it.
Why shouldn’t they be liable for military scenarios? Oh right, we don’t value the lives of our “enemies”, including their civilians.
> Is this for like military scenarios
Probably not. Weapons manufacturers are already well shielded from liability.