There is no such thing as an agentic codebase. If humans don't understand it, nothing really does. Agents give zero fucks about anything. If they burn a hundred tokens or a million to add a feature, they don't care. It's the developer's responsibility to keep it under control.
The big AI projects I've seen at work are...
- A Kafka topic visualization dashboard
and
- A Chrome extension the original "developer" can no longer work on because the bots wreck something else on every new feature he tries to add or bug he tries to fix
I think we're a ways out from truly complex code bases that only agents understand.
I've seen a bunch of hype videos where people spend lord knows how much money to have a bunch of these things run around and, I guess... use Facebook, make reports to distribute amongst themselves, and then the human comes in and spends all their time tweaking this system. And apparently one day it's going to produce _something_, but it's been two years and counting, and much like Bitcoin, I've yet to see much of this _something_ materialize in the form of actual, working, quality software that I want to use.
My buddy made a thing that tells him how many people are at the gym by scraping their API and pushing it into a small app package... I guess that's kind of nice.
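(The whole thing is basically a few lines of Python, something like the sketch below; the endpoint and field names are made up since I don't know what his gym's API actually looks like.)

```python
# Hypothetical sketch of the gym-occupancy scraper described above.
# The URL, JSON fields, and response shape are all assumptions; the
# real gym API will differ.
import requests

GYM_API_URL = "https://example-gym.com/api/occupancy"  # placeholder URL


def current_occupancy() -> int:
    """Fetch the gym's live head count from its (undocumented) API."""
    resp = requests.get(GYM_API_URL, timeout=10)
    resp.raise_for_status()
    data = resp.json()
    return int(data["current_count"])  # assumed field name


if __name__ == "__main__":
    print(f"{current_occupancy()} people at the gym right now")
```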
100% this. With these new tools it's tempting to one-shot massive changesets crossing multiple concerns in preexisting, stable codebases.
The key is to keep any changes to code small enough to fit in your own "context window." Exceed that at your own risk. Constantly exceeding your capacity for understanding the changes being made leads to either burnout or indifference to the fires you're inevitably starting.
Be proactive with these tools w.r.t. risk mitigation, not reactive. Don't yolo out unverified shit at scales beyond basic human comprehension limits. Sure, you can now randomly generate entirely new (unverified) software into being, but 95% of the time that's a really, really bad idea. It's just gambling, and some part of our lizard brains probably finds it enticing, but in order to prevent the slopification of everything, we need to apply some basic fucking discipline.
As you point out, it's our responsibility as human engineers to manage the risk/reward tradeoffs with the output of these new tools. Anecdotally, I can tell you we're doing a fucking bad job of it rn.