The irony is that we all independently decided QA was a “process smell” around the same time. The logic seemed airtight: developers should own quality, shift left, test in prod with feature flags, move fast. Every tech blog and conference talk said the same thing. What nobody mentioned is that QA teams weren’t just finding bugs—they were the institutional memory of how things break.
When you dissolve QA and tell developers “you own quality now,” that knowledge just evaporates. Each developer tests the happy path for their feature and calls it done. The edge cases? The interaction effects? The weird state machines? Those all ship to prod. The really insidious part is that the metrics looked great. Velocity up, deployment frequency up, cycle time down. We were measuring output, not outcomes. Exec dashboards showed green across the board while user experience quietly degraded.
Now we’re in the equilibrium state: software ships fast and breaks often, every deploy is a dice roll, and we’ve normalized “hotfix Friday” as just how things work. The velocity gains were real, but we were measuring distance traveled, not value delivered. Turns out “everyone owns quality” means nobody owns quality. Who knew.
During my army days, the sergeant major always seemed to know where we would fail to clean during inspection standbys, e.g. the top rim of doors. Part of it is a hazing ritual, but it also means that if you know where to look, you know where people will consistently fail. As an SRE who used to manually inspect changes and releases, I quickly learned what to check for and prevented many production issues from happening. Nobody will ever know about the failures that didn't happen, but everyone noticed the delay I introduced, so the inspection process was automated into the CD system and I was cut out. Fingers crossed the automation is as thorough, or can learn the common failure modes.
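For a sense of what such a gate looks like, here is a minimal sketch; the paths, branch name, and the single check are hypothetical, each one meant to codify a "top rim of the door" that inspections kept catching:

```python
#!/usr/bin/env python3
"""Hypothetical pre-deploy inspection gate: each check encodes one place
releases have historically failed, the way a sergeant major knows to run
a finger along the top rim of the door. All paths and names are invented."""
import subprocess
import sys


def models_ship_with_migrations() -> bool:
    """Flag releases that touch ORM models without shipping a migration."""
    changed = subprocess.run(
        ["git", "diff", "--name-only", "origin/main...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    touched_models = any(path.endswith("models.py") for path in changed)
    touched_migrations = any("/migrations/" in path for path in changed)
    return touched_migrations or not touched_models


# One entry per known failure mode; the list grows with every incident.
CHECKS = {
    "model change shipped without a migration": models_ship_with_migrations,
}

if __name__ == "__main__":
    failures = [name for name, check in CHECKS.items() if not check()]
    for name in failures:
        print(f"inspection failed: {name}", file=sys.stderr)
    sys.exit(1 if failures else 0)
```

The gate only knows the failure modes somebody bothered to write down, which is exactly the institutional memory the thread is talking about.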
There was a trend of devs being their own QA.
Also devs being infra (devops).
Also devs being PMs (product developers).
Also devs being managers (flat orgs).
Also devs being facilitators (rotating scrum masters).
I wonder why expertise is being lost.
We didn't decide it was a process smell; management saw a lot of expensive mouths to pay, and decided they could get away with cutting those people by making up some nonsense and then making the remaining staff do more work for the same pay. This happens over and over and over again, and every time, people fall for it.
Bring back specialization. Bring back paying experts for their expertise. Bring back one person having one job.
It's a case of high-trust, high-skill structures not being maintained while outsourcing and minimum-viable-skill, lower-cost employees are introduced.
The idea of owning your own quality only works if you can trust the dev to understand quality and to want to implement it. Independent, almost adversarial QA is required when you can't trust the devs.
I think the move to more test-driven development has made everyone a little bit overconfident. I've seen pull requests merged that pass all tests, but still end up bug-ridden. There's just no substitute for human eyes looking over the changes.
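As a concrete illustration of that failure mode, here is a minimal, invented sketch: a helper whose tests are green because they only exercise the happy path.

```python
# A minimal, hypothetical sketch of "all tests pass, bug ships anyway".
# The function and its tests are invented for illustration.

def average_latency(samples: list[float]) -> float:
    """Return the mean request latency in milliseconds."""
    return sum(samples) / len(samples)  # ZeroDivisionError on an empty list


def test_average_latency_typical() -> None:
    assert average_latency([10.0, 20.0, 30.0]) == 20.0


def test_average_latency_single_sample() -> None:
    assert average_latency([42.0]) == 42.0

# Nothing exercises average_latency([]), so CI stays green and the crash
# waits for the first quiet hour in production with zero requests.
```

A human reviewer scanning the diff would likely ask what happens on an empty window; the test runner never will.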
It’s possible for a woodworker to build a table which reliably does not collapse, even without having a third party test it.
It’s equally possible for a different woodworker to build a table which will collapse when deployed in a customer’s dining room.
The difference comes down to which woodworker I’ve hired, and how they’ve been trained.
If you can’t trust a woodworker to ship a table that stands under its own weight, layering on third-party QA isn’t really going to fix the underlying problem.
That said, cargo culting the “no QA” model is ill-advised. If a particular dev shop needs QA today, they’ll probably need it tomorrow.
The devs that 'know how to pass a good QA' are rare. I was lucky: my first shop's standard was 'make it easy enough for a person who thinks Internet Explorer might mean the AOL icon,' and my second shop had a glorious QA who used to do QA for the USAF. [0]
If anything I'd argue that the 'Shift of QA into Dev' was a first step to the role consolidation and job enshittification we see today.
[0] - I still recall the time I had a 'bad' bug and he told me, "look, nobody died". It set a good benchmark for understanding "I need to know how dangerous this -could- be."
It’s interesting because Apple actually has a ton of QA people, and they do their job more than well enough. Any bug you file is nearly guaranteed to already be a known issue in one backlog or another.
But Apple ships on a schedule. A project’s code is either on the train when it departs, or it’s not. Promo packets depend on shipping, so you take the bugs and assign them to the next release.
Bugs don’t stop releases; features just occasionally get punted. For every public feature you saw at WWDC that gets delayed because it isn’t ready, probably three or four things shipped with known bugs that just weren’t important enough to punt the feature.
QA is not the problem at Apple, because they know about the bugs. The culture of “we ship in September no matter what, nothing holds up the release” is the cause.