The hiding stuff is weird, because the whole reason you'd want to see what Claude is doing isn't just curiosity - it's about catching when it goes off the rails before it makes a mess. Like when it starts reading through your entire codebase because it misunderstood what you asked for, or when it's about to modify files you didn't want touched. The verbose mode fix is good, but honestly this should've been obvious from the start - if you're letting an AI touch your files, you want to know exactly which files. Not because you don't trust the tool in theory, but because you need to verify it's doing what you actually meant, not what it thinks you meant. Abstractions are great until they hide the thing that's about to break your build.
My first thought is that, for the specific problem you brought up, you find out which files were touched from your version control system, not the AI's logs. I have to do this for myself even without AI.
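For the git case, something like this is usually all I need (plain git, nothing Claude-specific, assuming the agent is working in an ordinary checkout):

    git status --short     # untracked and modified files in the working tree
    git diff --stat        # per-file summary of unstaged changes
    git diff --stat HEAD   # same, but including anything already staged

If the agent made commits on its own, git log --stat over the relevant range shows the same per-file breakdown commit by commit.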
You look at what Claude’s doing to make sure it doesn’t go off the rails? Personally, I either move on to another ask in parallel or just read my phone. Trying to catch things by manually looking at its output doesn’t seem like a recipe for success.
Yes, this is why I generally still use "ask for permission" prompts.
As tedious as it is a lot of the time (and I wish there were an in-between "allow this session", not just allow once or "allow all"), it's invaluable for catching when the model has tried to fix the problem in entirely the wrong project.
Working in a monolithic code-base with several hundred library projects, I find it essential that it doesn't start digging in the wrong place.
It's better than it used to be, but the failure mode when it goes wrong can be extreme: I've come back to find it 20+ minutes into going around in circles, frustrating itself because of a wrong meaning it ascribed to an instruction.
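For what it's worth, a partial mitigation (just a sketch - I'm going from memory of Claude Code's project-level .claude/settings.json permission rules, and the paths here are placeholders) is to pre-approve only narrow, read-only tool patterns per project so everything riskier still prompts:

    {
      "permissions": {
        "allow": [
          "Read(src/**)",
          "Bash(git status:*)",
          "Bash(git diff:*)"
        ],
        "deny": [
          "Read(.env)"
        ]
      }
    }

That still isn't "allow this session", and it won't stop it from digging in the wrong library on its own, but it does cut the prompt noise down to the actions that actually matter.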
Exactly, and this is the best way to do code review while it's working so that you can steer it better. It's really weird that Anthropic doesn't get this.
I assume it's to make it harder for competitors to train on Claude's Chain-of-Thought.
The other side of catching it going off the rails is when it wants to make edits without having read the context I know would've been necessary for a high-quality change.
> it's about catching when it goes off the rails before it makes a mess
The latest "meta" in AI programming appears to be agent teams (or swarms or clusters or whatever) that are designed to run for long periods of time autonomously.
Through that lens, these changes make more sense. They're not designing UX for a human sitting there watching the agent work. They're designing for horizontally scaling agents that work in uninterrupted stretches where the only thing that matters is the final output, not the steps they took to get there.
That said, I agree with you in the sense that the "going off the rails" problem is very much not solved even on the latest models. It's not clear to me how we can trust a team of AI agents working autonomously to actually build the right thing.