People are very good at ignoring warnings; I see it all the time.
There's no way to design it to minimise misinformation; the "ground truth" problem of LLM alignment is still unsolved.
The only system we currently have for letting people prove they know what they are doing is licensing: you go through training, you are tested on that training, and then you are allowed to do the dangerous thing. Are you okay with requiring that before the untrained can access a potentially dangerous tool?
There is no way to stop this at this point. Local and/or open models are already capable enough that there is only a short window before attempts at restricting this kind of thing simply lead to a proliferation of services outside the reach of whichever jurisdiction decides to regulate it.
If you want regulation that actually works here, it will need to focus on warnings and damage mitigation, not on denying access.