This agent stuff is really making me lose respect for our industry
All the years of discussing programming/security best practices
Then cut to 2026 and suddenly it's like we just collectively decided software quality doesn't matter, determinism is going out the window, and it's becoming standard practice to have bots on our local PCs constantly running unknown shell commands
Our industry has never been serious about security. We all download and run unvetted code via package managers every day. At least now the insanity is out in the open. We won't change until Skynet fires off the nukes.
Agents are finally giving employees the long overdue benefit that limited liability companies have enjoyed all along: gambling with the upside for themselves and the downside for other people.
I’ve never had respect for the industry as a whole, only individuals within it. There has been a serious lack of rigor and professionalism in software engineering for as long as I’ve been a part of it
I think it might be because we (or at least I) used to associate insecure actions with people, not computers. Computers should know better, right? Recently, I spotted that Opus 4.6 found config files for one of its tools and gave itself access to my whole filesystem. Similarly, Gemini CLI will rewrite itself if you let it.
It’s a nightmare… the problem is it’s far too easy for people to set these agents up without understanding the security implications.
We’ve covered so many issues already on our blog (grith.ai)
The number of wasted hours spent talking about code quality and patterns has to be astronomical.
> cut to 2026 and suddenly its like we just collectively decided software quality doesn't matter
I saw the sea change in 2008 when quality process got replaced with velocity and testing tasks. I've watched everything from Experian and health record data leaks to Windows 11 since that change. Software quality hasn't mattered for a long time.
The media isn’t helping. This wasn’t a “rogue AI”. It was a system that was given permission by a human operator.
We don’t say “a rogue plane killed 300 people today when it crashed into a mountain”.
The only difference in the AI case is that some people are attempting to shift blame for their incompetence onto a computer system, and the media is going along with it because it increases clicks.
People salivate so hard at the thought of the high level of automation promised that they're willing to do away with privacy altogether and live in Data Communism.
My thinking is, this will increase the demand for backup and other resilience solutions.
> Then cut to 2026 and suddenly its like we just collectively decided software quality doesn't matter
Is this new to people? I figured this out when I first entered the industry. The messages have never been particularly subtle.
How can you respect an industry that doesn't respect itself?
Turns out all of the frenzy of the ZIRP era is piddling compared to what happens when ZIRP is taken away.
I think it's batshit crazy. That's why I wrote yoloAI, so I could sandbox it up properly and control EXACTLY what comes out of that sandbox, diff style.
https://github.com/kstenerud/yoloai
I can't go back anymore. Going back to a non-sandboxed Claude feels like going back to a non-adblocked browser.
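For anyone curious what the sandbox-then-diff workflow looks like in practice, here is a minimal sketch of the general pattern: let the agent loose only on a throwaway copy, then review a diff before anything touches the real tree. This is just the idea, not yoloAI's actual implementation; the "agent" step below is a stand-in `echo`, and in practice that step would be the agent running inside a network-isolated container.

```shell
set -eu

# Real tree and a scratch copy the agent is allowed to touch
SRC=$(mktemp -d); WORK=$(mktemp -d)
echo "original" > "$SRC/app.txt"
cp -r "$SRC/." "$WORK"

# Stand-in for "run the agent inside the sandboxed copy" -- in practice
# something like a container with no network access wrapping the agent CLI.
echo "agent change" >> "$WORK/app.txt"

# Nothing reaches the real tree until the diff is reviewed and applied by hand.
# diff exits 1 when files differ, so suppress that under `set -e`.
diff -ru "$SRC" "$WORK" > /tmp/agent.patch || true
grep '^+agent change' /tmp/agent.patch   # the only thing that escaped the sandbox
```

The point is the trust boundary: the agent's write access ends at the scratch copy, and the human's review of the patch is the only path back into the real project.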
The whole agent ecosystem is a ridiculous shitshow. All of this because you need to ASAP find something believable to sell your overinflated, bullshit machine to the masses. Otherwise the bubble will burst.
We didn't collectively decide; this got forced down our throats, a mandate to apply a novel tool to every imaginable situation, because the execs got antsy about being left behind.
A truly absurd amount of capital was deployed, which triggered a cascade of reactions by the people in charge of capital everywhere else. They are extremely anxious that everything will change under their feet, and that if they don't start using as much of it as humanly possible right about now, they die.
That's it.
The tools have definitely found some use, there's more to learn on how else they can be used, and maybe over time smart people will settle on ways to wrangle it well. The messaging from the execs though, is not that, it is "you'll be measured on how much you use this, we don't know for what or how, it's for you to figure out but don't dare to not use it".
I do understand their anxiety: their job is to not let their companies die, and to make as much money as they can in the process; a seemingly major shift in the foundations of their orgs will cause fear.
But we have not collectively decided that it was safe, and good, to run rampant with these tools without caring for all that was learnt since software was invented...