The framing of "code's audience is shifting from humans to agents" feels premature. AI agents often don't have the full context needed to make good architectural decisions - they don't understand the client's constraints, the timeline, the maintenance burden, or the tradeoffs that led to a particular design. A human still needs to be the architect, and that means humans still need to be able to read and understand the code.
The real risk isn't that agents can't read messy code - it's that without humans deeply understanding the codebase, you lose the ability to catch when an agent has missed edge cases, taken a shortcut, or produced something that technically passes but doesn't actually solve the right problem. We've all seen agents "cheat" their way through tasks in ways that look correct on the surface.
So the question isn't whether good code matters less now - it arguably matters more. Clean architecture, clear documentation, and well-understood code are what let you verify that an agent did the right thing. And testing remains as useful as it's always been, not because the agent needs it, but because humans need proof that the system actually works. Tests are a spec, a review mechanism, and a safety net all in one.
We're a long way from truly hands-off AI development. Until then, writing good code is how you stay in control.