I agree that LLMs could be more open about their dangers and that people are bad at judging risks sometimes.
Still, I think a band saw carries very little warning, and by its design there is very little anyone can do to stop me from cutting off a finger if I'm not careful.
LLM companies can do very little about the unpredictability of LLMs, so we have to choose how far we let it go. In the end, an LLM only produces text; we are in control of which tools we give it. The more tools, the more useful, and also the more dangerous.
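That control point can be made concrete. A minimal sketch of the idea, with hypothetical tool names and a made-up dispatch shape (not any real agent framework's API): the model can request anything, but only explicitly granted tools actually run.

```python
# Hypothetical sketch: gate an LLM's tool calls behind an explicit allowlist.
# Tool names and the dispatch shape here are illustrative assumptions.

READ_ONLY_TOOLS = {
    "read_file": lambda path: f"(contents of {path})",
    "run_query": lambda sql: "(query results)",
}

DESTRUCTIVE_TOOLS = {
    "delete_table": lambda name: f"dropped {name}",
}

def dispatch(tool_name, arg, allow_destructive=False):
    """Run a model-requested tool only if it is in the granted set."""
    granted = dict(READ_ONLY_TOOLS)
    if allow_destructive:
        granted.update(DESTRUCTIVE_TOOLS)
    if tool_name not in granted:
        # The model asked for something we never handed it; refuse.
        raise PermissionError(f"tool {tool_name!r} not granted")
    return granted[tool_name](arg)
```

The unpredictability of the model is unchanged; what changes is the blast radius when it does something unexpected.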
And maybe it's all worth it. Maybe the LLM deletes the database only sometimes, and in between we make a lot of money. I don't think my employer would enjoy that, so I will be more conservative.
It's possible to make AI safe, but that also throws most of the gains out of the window, especially if the artifact is a diff that takes time to review. In IT, you often have to grant access to possibly malicious users; you just have to scope what they can do.
But the push is for agentic everything, where AI needs to be everywhere, not in its own sandbox.
A band saw is always a screaming band of bladed death. An LLM is sometimes a buddy, sometimes a mentor, and only sometimes the guy who drops your database.
> Still, I think a band saw carries very little warning, and by its design there is very little anyone can do to stop me from cutting off a finger
Most saws have a blade guard of some sort to prevent the blade from being over-exposed. They are also COVERED in warning signs and symbols, as well as having other safety features like emergency stop buttons/pedals.
A great deal of effort has clearly gone into warning people and keeping them safe around saws. LLMs, conversely, have been shoved into everything with very little forethought or testing to make sure they are safe and perform the task correctly.