
dryarzeg · today at 2:50 PM

> It might cause minor changes that we don't yet know how to notice, and which only cause symptoms in 20 years' time, for example.

In that case, even if it led to many deaths, it would be difficult - if not practically impossible - to hold anyone accountable. But such a turn of events is equally difficult, or rather practically impossible, to predict, don't you think? I apologize for not clarifying this in my original comment: I wasn't referring to delayed effects, but to effects that become evident almost immediately after the drug is used (say, "within a year and a half at most"). I simply didn't phrase my thought correctly, and I'm sorry for that.

> ChatGPT is not intended to be a drug manufacturing tool though?

That’s certainly the case right now. But LLMs like GPT, Claude, and Gemini weren’t created for waging war either, were they? Then why did Anthropic recently have - let’s just say - "some issues in its relationship" with the DOD, if Claude was never meant to be used in war? Why was the ban on using Gemini to develop weapons removed from its terms of service?

You’re right that LLMs weren’t created for such purposes, and to be honest, I believe that - at least for now - it’s simply unethical to use them for that. These aren’t the kinds of decisions and actions that should be outsourced to a machine that bears no responsibility - moral or legal.

> ChatGPT can give bad advice without even having any bugs. That's just how it works.

To continue my thought, this is precisely why I believe it is unethical to give LLMs any tasks whatsoever that involve human lives. There are simply no safety guarantees - not just "some", but none at all - aside from unreliable safety fine-tuning and prompting tricks. For now, that’s all we can count on.

> If OpenAI were running around claiming "ChatGPT can reliably design drugs, you don't even need to test it, just administer what it comes up with" then sure they should be liable. But that would be an insane thing to claim.

They don't claim it yet. And, as one commenter (qsera) put it below your comment:

> The trick is to make people behave like that without actually claiming it. AI companies seems to have aced it.

They probably won't claim exactly that "ChatGPT can reliably design drugs", if only because of the possible consequences. But I'm almost certain there will be something similar in meaning, yet legally vague - so that, from a purely legal standpoint, there will be no grounds for complaint. What's more, they are already making some moves - relatively small ones so far - into the healthcare sector; see "ChatGPT Health"[1], for example. I don't think they will stop there. It's a business, after all.

> if ChatGPT claims that a drug design is safe and effective

I have already said that OpenAI would not be the only party that should be held responsible in this case. The (hypothetical) user should also bear some responsibility, and in the scenario you described, the primary responsibility should indeed lie with them. That said, I may be wrong, but it’s possible to fine-tune a model so that it at least warns of the consequences or refuses to claim that "this works 100%". This already exists: models refuse, for example, to provide drug recipes or instructions for assembling something explosive (specifically something explosive, not explosives - out of curiosity, I recently asked Gemma 4 during testing how to build a hydrogen engine, and it refused to describe the process because, as it put it, hydrogen is highly flammable and the engine itself is explosive), pornography, and things along those lines. Yes, I admit it’s far from perfect, but at least it works to some degree. And if I’m not mistaken, many models even attach disclaimers to medical advice, like "it’s best to consult a doctor".

In short, what I’m getting at is that the issue lies in how convincing LLMs can be at times. If the model honestly warns of the dangers, if it says "this doesn’t work" or "this requires thorough testing", and the user goes ahead anyway - well, that’s like hitting your finger with a hammer and then suing the hammer manufacturer. It’s a different story when the model states with complete confidence that "this will definitely work, and there will be no side effects" - and the user believes it; some effort should go into preventing such cases. But otherwise, yes, I think you’re right about the scenario you described.

And to conclude: I don’t think that, when it comes to drug development, we’re talking about ordinary people or individual users. The parent post implies (though I may have misunderstood) that ChatGPT would be used by entire organizations, such as pharmaceutical companies - just as LLMs in a military context are used not by individuals but by the DOD and similar organizations. I think this shifts the level of responsibility somewhat, because when OpenAI enters into a contract for the use of its product, ChatGPT, in drug development and manufacturing, it’s effectively implying that ChatGPT is ready for such use.

[1] https://openai.com/index/introducing-chatgpt-health/

EDIT: I'm sorry my reply is so long; I'm trying to express all of my thoughts in one comment, which is probably not a good idea. I would write something like a blog post about this, but a lot has already been written on the topic, so... Also, I used a translator in some parts, because English is not my native language.