logoalt Hacker News

simoncionyesterday at 7:07 PM0 repliesview on HN

> But that's the thing: the table saw has safeties. Someone put them there.

You noticed that I mentioned that this hypothetical table saw has poorly-designed, entirely inadequate safeties? Things like Opus treating the data it presents to the user as commands that it should execute [0] is definitely [1] a sign of solid, well-designed safety mechanisms.

You might choose to retort "Well, that's because the user isn't running the tool in the mode that makes it wait for confirmation before doing anything of consequence!". In reply, I would point in the general direction of the half-squillion studies indicating that a system whose safety requires an operator to remain vigilant when presented with a large volume of irregularly-presented decision points (nearly all of which can be safely answered with a "Yes, do it.") does not make for a safe system. [2] It -in fact- makes for a system that's designed [3] to be unsafe.

You might also choose to retort "That's never happened to me, or anyone that I know about.". Intermittent failures of built-in safeties that happen under unpredictable circumstances are far, far worse than predictable failures that happen under known ones. I hope you understand why.

[0] <https://old.reddit.com/r/ClaudeCode/comments/1sex28q/opus_46...>

[1] ...not...

[2] I would also -somewhat wryly- note that "An AI Agent that does all of your scutwork, but whose every decision you have to carefully scrutinize, because it will irregularly plan to do something irreversibly destructive to something you care about." is not at all the picture that "AI" boosters paint of these tools.

[3] ...whether intentionally or not...