AI agents are still just software crashing in a new way.
The user who put the AI agent in motion probably bears some of the responsibility.
FYI, the HN guidelines state, "Please don't use HN primarily for promotion. It's ok to post your own stuff part of the time, but the primary use of the site should be for curiosity." I would encourage you to submit content from others rather than just your own.
Who is liable when your AI sneaks patented or copyrighted code into your codebase?
I won't read the article, but I think this is the role that will remain for humans: to be the 'fall guy' when the vibes go wrong. You will have to live in chronic stress, on call so to speak, so that when prod goes down, you take the blame. Maybe that vibe-coded PR you didn't and couldn't read contained a serious bug or security lapse; maybe the system design you didn't do but approved contained an RCE you never knew about. Fun times ahead.
A better analogy would be: you hire a robot contractor to do work. Before it arrives, you are asked if the robot should request permission before going into rooms. You say no, and the robot enters the server room.
It does change something for me, even though the meat of your argument still stands. The robot is clearly responsible, but so are you.