Making knowledge illegal is a dangerous precedent. Actions should be illegal, not knowledge. Don't outlaw knowing how to make neurotoxic agents; outlaw actually trying to make them.
As for OpenAI immunity, I'm not sure I see the problem. Consider the converse position: if an OpenAI model helped someone create a cancer cure, would OpenAI see a dime of that money? If they can't benefit proportionally from their tool allowing people to achieve something good, then why should they be liable for their tool allowing people to achieve something bad?
They're positioning their tool as a utility: ultimately neutral, like electricity. That seems eminently reasonable.
The point is valid, but that's typically the way it is. "You can't enjoy the benefit but the detriment is all yours" is how the federal government generally operates.
It's wild that this is being downvoted on HN. Facts should never be illegal or suppressed.
If you disagree, you shouldn't downvote; you should refute in a reply.
1. LLMs don't just provide knowledge, they provide recommendations, advice, and instructions.
2. OpenAI very much feels that they should profit from the results of people using their tools. Even in healthcare specifically [0].
[0] https://www.wisdomai.com/insights/TheAIGRID/openai-profit-sh...