> Because the "safest" AI is one that doesn't do anything at all.
We didn't say 'perfectly safe' or use the word 'safest'; that's a strawperson followed by a disingenuous argument. Nothing is perfectly safe, yet safety is essential in all aspects of life, especially in technology (even if it's not a pressing concern for many technologies). It's a cheap way to try to escape responsibility.
> In most cases, failing to be helpful is costly
What a disingenuous, egocentric approach. Claude and other LLMs aren't that essential; people have other options. Everyone has the same obligation not to harm others. Drug manufacturers can't say, 'Well, our tainted drugs are better than none at all!'
Why are you so driven to allow Anthropic to escape responsibility? What do you gain? And who will hold them responsible if not you and me?
I like Anthropic, and I like Claude's tuning the most out of any major LLM. Beats the "safety-pilled" ChatGPT by a long shot.
>Why are you so driven to allow Anthropic to escape responsibility? What do you gain? And who will hold them responsible if not you and me?
Tone down the drama, queen. I'm not about to tilt at Anthropic for recognizing that the optimal amount of unsafe behavior is not zero.