
tarsinge · today at 9:38 AM

And they're vibing replies to comments in the Reddit thread too. When commenters point out that they shouldn't run in YOLO/Turbo mode and should review commands before executing them, the poster replies that they didn't know they had to be careful with AI.

Maybe AI providers should give more warnings and not falsely advertise the capabilities and safety of their models, but it should be common knowledge by now that, despite the marketing claims, the models are far from autonomous and need heavy guidance and review.


Replies

fragmede · today at 9:53 AM

In Claude Code, the option is called "--dangerously-skip-permissions", in Codex, it's "--dangerously-bypass-approvals-and-sandbox". Google would do better to put a bigger warning label on it, but it's not a complete unknown to the industry.
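For reference, this is roughly what invoking those modes looks like (a sketch, assuming the standard `claude` and `codex` CLI entry points):

    # Claude Code: auto-approve every tool call; only sensible inside an isolated sandbox
    claude --dangerously-skip-permissions

    # Codex CLI: disable both the approval prompts and the sandbox
    codex --dangerously-bypass-approvals-and-sandbox

The flag names themselves do the warning: you have to type "dangerously" to get there.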