From many years of first hand experience:
- QA is always the first thing companies outsource, with predictable results
- Companies either go the route of a “separate QA org with a separate management chain” or “have QA engineers report to dev managers”. I’ve seen serious misaligned incentives and toxic outcomes with both
- Frequent Slack messages at 4:15 PM on Friday - “hey they just merged the PR, we really need it tested before Monday stand up”
- QA becomes a de facto dumping ground for glue work that other teams don’t want to do. Senior QA ends up morphing into a “responsibility without authority” project-manager role
- There is zero internet “community” around QA the way there is for developers or designers. There is no Slashdot or Hacker News for QA and there never will be. Just a bazaar of book authors and consultants promoting themselves on LinkedIn
IMO the only thing that makes sense anymore is having good SDETs embedded in engineering teams.
This is something I've put a lot of thought into the past couple of years, and a few little soundbites I've come up with during my imaginary shower interviews are:
1. If you don't have Quality Assurance, then you have Quality Uncertainty.
2. QA is a full-time job. If you offload the responsibility of QA to the engineers, then you're giving them 2 full-time jobs. So unless they're working 16-hour days (even if they are, tbh), you aren't assuring quality, you're compromising it.
"Engineers sometimes exhibit an arrogance that they can do everyone else’s job,"
This rings so many bells that it feels like some Buddhist festival. The same applies to QA, Operations, and anything outside actual product development: when this arrogance was shared between bosses and developers, everything was fine on their side. Now, with AI, the arrogance stays only on the bosses' side, and we have developers freaking out.
There are two very important ideas in this article, which I fully agree with: QA are not the only people responsible for quality - the entire team is. QA act as experts and drivers of the quality management process, but they should not and do not act alone. They should take an adversarial approach, which is helpful at every stage of the SDLC. So, a few more items from my list of why QA is useful in every engineering organization, and why every team I hire has at least one QA starting from 4-5 people:
1. Quality management is a continuous process that starts with product discovery and business requirements. Developers often assume that requirements are clear and move on to building the happy path. QA often explore requirements in depth and ask a lot of good questions.
2. QA usually have the best knowledge of the product and help product managers understand its current behavior when new requirements would change it.
3. The same applies to product design. A good designer never leaves the team with just a few annotated screens; they support developers until the product is shipped. Design QA - verifying that the implementation conforms to the design specs - can be done with the QA team, which can assist with automating design-specific tests.
4. Customer support - QA people are natural partners of the customer support organization, with their knowledge of the product, existing bugs, and workarounds.
And just a story: at one of my previous jobs, a recently hired QA engineer spotted a numerical error in an All Hands presentation. That won immediate buy-in from the founders. :)
Feels like AI is flipping this question.
If code generation becomes cheap, then verification becomes the bottleneck.
In that world, QA isn’t going away — it becomes the core engineering function.
Absolutely QA "should" exist. Our QAs are the most knowledgeable people on our product, often informing devs and product alike of requirements, missing requirements, weird configuration outliers, how to actually use the damned app, etc. Without QA we would be developing and testing for brittle requirements to get code into an MVP state, not a functional, user-friendly state.
A big reason companies constantly cycle between "developers should own QA" and "we need a dedicated QA team" is the E2E maintenance cliff.
It's easy for developers to own 50 E2E tests. But from what we see with our users, it's a nightmare when they scale to thousands of tests, and between releases, 10 features might have changed, suddenly causing 300+ tests to fail simultaneously in CI. No developer wants to spend three days debugging and updating the tests, so the test suite rots.
The bottleneck for large-scale E2E testing isn't creation, it's maintenance. We built Stably (https://stably.ai) specifically for this—we have a scalable cloud infrastructure that can process hundreds of concurrent Playwright test failures, understand them by stepping through the failure traces to debug the UI drift, and auto-fix the test scripts in minutes. Developer-led QA only works if you completely remove that massive maintenance tax.
I agreed 100% with the beginning, but I got lost at the automatic verification engineers. Especially the part starting with “You also need to level up your skills in the testing and deployment lifecycle….” How do you know what skills the QA person possesses? This sounds exactly like something I might hear from a developer explaining how I should do my work and what tools I should use.
Test automation is not the same kind of coding that developers do. I don’t mean in terms of quality, but a completely different set of things matters. Unlike what the writer here may think, while it is important to have tests that pass or fail fast, performance is not one of the most important characteristics. Tests are run less often than the code, and not all tests need to run every time or even every day. One of the most important characteristics of a good test is something developers easily overlook: maintainability. Once code is written, it’s tested, and then it sits in the codebase until refactored. Tests don’t get that luxury. Tests run against an environment that is under constant change (especially with shift-left), which makes maintainability and abstraction layers much more important in test automation code than in feature code.
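A minimal sketch of the abstraction-layer point, using the page-object pattern (the driver, page class, and selectors below are all invented for illustration, not a real browser API): selectors live in one place, so when the UI drifts, only the page object changes, not the dozens of tests built on top of it.

```python
# Page-object sketch: FakeDriver stands in for a real browser driver
# (e.g. Selenium or Playwright); every name here is illustrative.

class FakeDriver:
    """Minimal stand-in that records which selectors were acted on."""
    def __init__(self):
        self.actions = []

    def fill(self, selector, value):
        self.actions.append(("fill", selector, value))

    def click(self, selector):
        self.actions.append(("click", selector))


class LoginPage:
    # Selectors live in ONE place. When the UI changes (say, the button
    # id is renamed), only this class is updated; every test stays intact.
    USER_FIELD = "#username"
    PASS_FIELD = "#password"
    SUBMIT_BTN = "#login-submit"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.fill(self.USER_FIELD, user)
        self.driver.fill(self.PASS_FIELD, password)
        self.driver.click(self.SUBMIT_BTN)


# A test now expresses intent, not selectors:
driver = FakeDriver()
LoginPage(driver).login("alice", "s3cret")
print(driver.actions[-1])  # ('click', '#login-submit')
```

The point isn't the pattern itself; it's that test code needs deliberate abstraction layers precisely because the thing it tests keeps moving under it.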
It's also worth noting that some developers do not understand testing at all. They may think it’s all about getting tests to pass or getting the software to break. It’s neither: it’s checking that the software works as expected in different scenarios.
Because of those two things - test-code maintainability and understanding testing at a deeper level - some developers write great tests and enjoy good QA, as the resulting product gets better. And then there are developers who struggle with all of that, and usually with their development work as well.
So many of these articles talk about why a particular role or type of role within an org should be there or not, but they fail to touch on the 'theory' of why or why not. This article has that same lack of foundation, and so meanders around a bit, IMHO.
Any process in an organization of size will have indicators that measure output. Those indicators should typically be paired with indicators that measure the quality of the output, to ensure product or service levels. That's the theory, and the genesis of 'quality management': whether you're measuring output code or breakfasts [1] or chemicals or widgets or medicine, you need to measure the quality of the output if there are any client specifications or expectations around the output. And there are very few cases where your customers will not have any specs or expectations around your product or service.
How you manage quality follows from those basics; it matters where you measure quality but it is so process dependent - earlier in the process lowers costs, but may not suffice to guarantee final quality - that quality management has to be designed around the specific process; balancing cost with benefit and requirements. How deep or specialized quality management becomes depends on the needs of the org, the size of the org, and the needs of the particular process.
This is why I'm skeptical about whether broad articles like this are beneficial overall. Why and how matter, and where's the foundational discussion behind why and how? Do folks not think at the organizational/business level? Maybe not everyone is a Sheryl Sandberg :-)
"Automated Verification Engineer is experimental"
Except I worked at a company with a QA department made up of entirely "Automated Verification Engineers" ... over a decade ago. And the head of the department had taught at a local QA school (so presumably other QA engineers learned that style of work from her also).
Good QA departments switched to this mode long before AI was even a thing! Maybe 90+% of QA departments didn't work that way pre-AI, but there certainly were ones that did!
The article seems to equate QA with testing, which is a short-sighted view. QA also includes things like standards, and importantly, design and code reviews, which are actually the best way to improve quality.
Yes, QA should exist, and should be managed by Operations.
I've been places where devs have no idea what the product - as a whole - does. They just work on the feature of the sprint and throw the code over the wall. Their testing consists of: if it compiled, it passed. They have no idea how to even start actually testing if it's not on the happy path.
I've been in places where the code accomplished the spec, but in the laziest way possible, so it appeared to work but was useless outside of what the tests looked for.
I knew one QA guy that was amazing but was so overloaded because management kept hiring "cheap" QA that were actively making his life worse.
I'm a tech writer right now at a tech company and a dev just sent over an LLM generated "doc" that's referring to things that don't exist.
Neither management nor dev has learned anything from Therac-25. QA is hard.
I worked with an excellent QA once, and that changed my perspective completely as a dev.
A great QA can understand the features of a product quickly, turn those concepts into some sort of grid or matrix in their mind, then pull a bunch of paths and scenarios with estimated priorities and probabilities at a fast and efficient pace, all with great coverage. They can also identify features contradicting each other more quickly than product people.
I think a good QA is capable of being a great vibe coder nowadays, too. If you can write great test suites (even just the test names), agents nowadays are able to turn those specs into decent codebases. Comparatively, I know a lot of decent devs with not very good taste in testing, who often write overlapping tests or miss important paths.
A good QA engineer is worth many multiples of their weight in gold.
I often find that devs who think otherwise have never had to ship anything against a drop dead date, or definitively prove that a supplier has not met a contractual obligation (after the normal engineers have found it to be their problem in the first place).
I miss my Q/A partner. He was an engineer who gave me feedback on designs and code, helped debug, and wrote tests I could run.
It's terrifying without that support. The beancounter level of management does not Get It that it's cheaper to have this kind of support now than much later in the product cycle.
Is this the real life? Is this just fantasy?
If engineering owns quality, then engineering owns it all, up the chain. No need for anything or anybody else.
Which is the AI pipe dream, really.
By using AI - for QA, coding, or otherwise - you are, in fact, externalizing services in the hope that the services will improve and costs will drop.
Let me know how that goes without HITLs.
You need to scale up QA. You need a way for them to share their work with whichever harness you're using. QA will improve the quality of your codebase immensely, and they will increase the speed of development and reduce costs.
Fuck off. I was a QA engineer and I helped prevent all kinds of unusual bugs that tests would never find, because they required specific workflows to uncover them. Not that I was some genius who knew all the best workflows to test… but when I was randomly poking around and a crash or other unwanted behavior presented itself, I was methodical about working out the exact steps that led to the repro, and I wrote actionable bug reports and collected usable environment data.
Also, I worked tech support for a number of years and I've watched all kinds of unexpected ways that people would interact with their devices. I always did all the weird stuff that most of us wouldn't consider when operating software:
"Why would someone drag every file to the desktop before they opened them?"
"I don't know, but I saw someone who always did that so I tried it."
I work with someone who does great QA work. They know how to rip something apart, they understand the user's non-technical perspective and approach, they understand what edge cases to look out for, and they have the actual equipment to test on different physical devices (and so on).
Most importantly, they have the diligence and patience to methodically test subtly different cases, which I frankly don't have.
On the question of whether QA slows things down, I have to ask: slows down what? Slows down releasing something broken? Why is that something to optimize for? We should always be asking how long it takes to release the right thing (indeed I'm most productive when I can close a ticket after concluding nothing is needed).
I built products at Stripe as an engineer and never worked with an explicit QA team. Each team was responsible for the quality to a large extent.
That being said, QA is definitely an important aspect of software development - regardless of who owns this work. Imo, instead of having a QA engineer per team or per few teams, you should have a QA-shaped role (similar to the AVE you mentioned) that oversees a large area, like an org, and pushes hard to make sure quality standards are held high across teams.
It's also trivial for engineers these days to have great e2e automated test coverage with AI. We're actually building getlark.ai that helps engineers with this.
> Perhaps the strongest argument nowadays for QA is that with AI, automated verification is a leverage maximizer.
That sure is a sequence of words.
It comes across to me like the emphasized part is arguing against what it's supposed to be arguing for.
Or is the premise that the developers somehow can't run the AI verifier?
I ship a very visible product which, when it breaks, generates a lot of social media angst (it's in the gaming adjacent space). So we try not to break things to the best of our ability. We have very few QA people and have whittled down that team over the past few years (DevOps was eliminated during the first round of layoffs ~2023).
This was painful at first but I do think it's the way to go. We found that too much manual QA incentivizes devs writing features to throw it over the fence - why should they test more if someone else is paid to do it? Devs need to feel the pain of writing tests if their code is hard to test, and they need to be held accountable when their code blows up in production. This feedback loop is valuable in the long run.
Same thing for test automation. Previously we shipped this over to our in-team DevOps people and they built complicated CI/CD setups. Losing them meant we needed to simplify our stack. Took a while and it slowed down feature development, but it was worth it. Of course you need leadership who understands this and dedicates time to building this out.
In defense of DevOps, I think the landscape for automation was poor a few years back. Jenkins and Teamcity are way too complex. Github Actions (for all its warts, and there are many) is much simpler. Our pipelines are also in their own CI/CD (CDK, CodeBuild) - infrastructure as code is the key to scaling.
We still have manual QA people to test things we can't automate. Usually this is for weird surfaces such as smart TVs, or for perceptual tests. I don't see this going away any time soon, but high levels of automation elsewhere drive down the need for "catch-all" manual testing.
QA should exist.
QA should not be forced into an engineering or automation track because the incentives are wrong. You end up with test code becoming the goal and then it usually rots due to most QA not having the experience to create a codebase that scales.
I don't think the industry today understands how to treat QA and I think that leads to a lot of assumptions that it's not useful.
The best QA isn’t just about finding bugs. It’s about bringing quality to the codebase: typing, better static analysis, linters, and useful libraries. In the other direction, it’s also about integrating into the release process: folding the what-goes-out / what-stays-in-beta decisions into quality’s approach to giving signal, over any other part of the codebase.
Basically, anything that involves gating bits of code, and deciding whether or not to gate them.
Yes, but should be supercharged with AI.
I’d like them to record their steps in detail in videos and have AI dissect the steps into a text descriptor that’s relevant to the code.
Yes. Without a doubt.
I worked with a QA team for the last fifteen years until last year when they laid them all off.
QA is a discrete skill in and of itself. I have never met a dev truly qualified to do QA. If you don't believe this, you have never worked with a good QA person. A good QA person's superpower is finding weird broken interactions between features and at the layers where they meet. Things you would never think of in a million years. Any dingbat can test input validation, but it takes a truly talented person to ask "what if I did X in one tab, Y in another, and then Z, all with this exact timing so the events overlap?" I have been truly stunned at some of the issues QA has found in the past.
As for time, they saved us so much time! Unless your goal is to not test at all and push slop, they are taking so much work off your plate!
Beyond feature testing, when a customer defect would come in they would use their expertise to validate it, reproduce it, document the parameters and boundaries of the issue before it ever got passed on to dev. Now all that work is on us.
I was in testing for 17 years before moving back into Engineering. I have spent my time in Engineering leading teams to push quality left. But I think it's better to say "quality is a system" than to say "engineers own quality." What are you building into your SDLC that makes sure quality happens? Testing is just a part of that, and not even the biggest one.
At a previous job, the team I was on [1] had a dedicated QA engineer (unofficially---he was the only QA engineer that ever worked with our team). Before we got bought, we worked closely with the QA engineer. He had access to the source repo, could compile and test our stuff, and we constantly told him of new features we were working on to give him a heads up. During this time, we were our customer's (yes, we only worked with one customer, who was paying us seven figures per month for service) favorite vendor. Over the 10 years or so of this time, we had like two regressions hit production, and those were found during deployment and we could roll back immediately.
We then got bought out and new management put in. They siloed QA and made it impossible for us to even talk to them about what we were doing. Within a year, we had one deployment fail four times in a row and went from favorite vendor to "utter trash vendor we can't get rid of." Our QA engineer quit, as well as the rest of the team (I was the last to leave). I'm still surprised they still have that customer.
[1] We were the only team having to deal with SS7. It wasn't easy hiring programmers for it, and I think at the highest head count, we had like five members (including the manager when we had one [2], but not including QA, which was "officially" never a part of our team).
[2] If it was tough hiring programmers to deal with SS7, it was even harder to hire managers to deal with programmers dealing with SS7. I think for half the time I was there (over 10 years) I had no official manager and reported to a director or higher in the company.
Taking the shift-left thing to its logical conclusion, you have basically everything automated, and we risk forgetting we are building for people. At some point someone needs to use the thing with fresh eyes and validate that it is as advertised.
Go ahead, and do without.
What could possibly go wrong?
Should Accounting exist?
Should Legal exist?
Should Facilities exist?
Surely your average employee could own each of these functions.
If you have to ask, the answer is "yes."
> Have engineering own quality
The moment that happens it will either be re-outsourced to QA anyways or quality will become a question of licensing and bonding of professional engineers
I think social media companies don't need that
Enterprise software companies definitely need it. Customers ask: was this tested? Where is the test report?
QA should always exist. The question is just do you want to pay for them. Usually the preferred gaslighting is "without QA devs will do better testing", but it's always about money.
Background: I was a software tester for 6.5 years; currently a software engineer, having worked with dedicated testers for about 5 years.
"QA" should exist regardless of whether you think dedicated software testing staff fit into your org. The whole team is responsible for assuring quality.
Dedicated software testers verify that the solution actually does what it's meant to do, and good software testers become deeply knowledgeable about the product and how features interact. They are ultimately a second pair of eyes, and should have a direct line to product owners or customers.
This can't be automated. The ongoing tests verifying that existing features continue to work without regression can and should be automated (throughout the dev process), but adding generative AI to the human verification step is a recipe for disaster.
QA? Or testers?
As someone who firmly believed that QA is a dying/dead profession: after moving over to AI coding for the last 6 months, I think coding is dead, and QA and code reviews are what will remain in the aftermath of AI coding. Testing the output of AI to make sure it is doing what you want it to do is most of my job now.
I don't see this as a "modern" vs "back in the day" thing.
The real reason many software orgs nowadays don't have QA is for the simple reason that it's slow. Everything in the consumer tech space is about rapid growth, and moving as fast as possible. Nobody cares very much that the software has bugs, what matters is whether it has users.
But outside of consumer tech, QA is a lot more common, since it matters a lot more that the software's logic is correct. (Speaking personally - I used to work for a genetics lab, and we had QA.) There are just different economic incentives involved.
"Before I weigh in further, I’d like to make sure you’re familiar with the testing pyramid."
The testing pyramid is SWE kool-aid par excellence. Someone wrote a logical-sounding blog post about it many years ago, and then people started regurgitating it without any empirical evidence behind it.
Many of us have realised that you need a "testing hourglass", not a "testing pyramid". Unit tests are universally considered useful, there's not much debate about it (also they're cheap). Integration tests are expensive and, in most cases, have very limited use. UI and API tests are extremely useful because they are testing whether the system behaves as we expect it to behave.
E.g. for a specific system of ours we have ~30k unit tests and ~10k UI/API tests. UI and API tests are effectively the living, valid documentation of how the system behaves. Those tests are what prevent the system becoming 'legacy'. UI and API tests are what enable large-scale refactors without breaking stuff.
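A minimal sketch of what "tests as living documentation" can look like: behavior-level tests against a toy cart API (the `Cart` class and its rules are invented here for illustration), where the test names alone tell a reader what the system's contract is.

```python
# A toy "API" plus behavior-level tests that double as documentation.
# Everything here is hypothetical; the point is the naming style.

class Cart:
    def __init__(self):
        self.items = {}

    def add(self, sku, qty=1):
        if qty <= 0:
            raise ValueError("quantity must be positive")
        self.items[sku] = self.items.get(sku, 0) + qty

    def total_units(self):
        return sum(self.items.values())


def test_adding_same_sku_twice_accumulates_quantity():
    cart = Cart()
    cart.add("apple", 2)
    cart.add("apple", 3)
    assert cart.total_units() == 5

def test_non_positive_quantity_is_rejected():
    cart = Cart()
    try:
        cart.add("apple", 0)
        assert False, "expected ValueError"
    except ValueError:
        pass  # documented behavior: invalid quantities are refused

# Run directly, or via pytest, which discovers test_* functions:
test_adding_same_sku_twice_accumulates_quantity()
test_non_positive_quantity_is_rejected()
```

When the suite is written this way, a refactor that breaks the contract fails a test whose name states the violated behavior, which is exactly what keeps a system from quietly going 'legacy'.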
Isolated QA should not exist because anything a QA engineer can do manually can be automated.
Of course, anyone would agree that if wishes were fishes, QAs should not exist. We would all use agile with cross-functional teams. Every single team member can do any work that may be needed. All team members can take time off any time they need to because we have full coverage and the world is a beautiful place.
Of course, none of this is true in the real world.
For example, just last week we had a QA essentially bring down our web application on the staging environment - always reproducible with a sequence of four clicks. Follow the sequence with roughly the right timing and boom, exception.
Should this have been caught before a single line of code was written? Yes, it should have been. However, the reality is that it was not. Should it have been caught by some unit test? Integration test? End-to-end test? Code review? I'd argue that as we barrel into a world of AI slop, we need to slow down more. We need QA more than ever.
should cats drink milk?
Yes
As someone who holds a degree in Medicine alongside a degree in Computer Science, and who works in medical devices, can I just please emphasise that QA should exist in this industry and is well deserved?
Sorry. No 'blue screens' or stack traces in my pacemaker or insulin pump, please.
After AI reviewers, continuous AI QA is a possibility. If anyone is doing this I would like to hear your experience.
Hard to believe people are asking this question in 2026.
Quality is something that takes dedicated focus and lots of work. Therefore it’s a job, not an afterthought or the lowest priority for someone whose primary focus is not quality.
I've done software dev and manufacturing and engineering- QA is ESSENTIAL EVERYWHERE - ALWAYS.
I would ask anyone that thinks QA is unnecessary to spend a few days on an actual aerospace production line. Software and hardware get EXCESSIVE QA. For good reason.
There's a basic loop that goes on regardless:
1. define a requirement
2. implement the requirement
3. verify that the requirement was implemented
TDD was built around the idea that 1 and 3 could be unified in automated testing, and that's certainly true for a large part of it. But QA as a discrete role needs to exist because, beyond verifying that 2 was done correctly, they expose higher level bugs in 1, the requirements themselves.
It's virtually impossible to define requirements completely and without second order interactions that cause problems. QA is as effective at exposing assumptions and handwaving by the people who created the wireframes or the visual design as by the developers failing to test their own work.
And ideally, this leads to the cycle being virtuous: higher quality starts at the requirements phase, not the implementation phase. It's not just that QA should work closely with the engineers - the engineers need to work closely with UX and VD to ensure they fully understand the requirements. The incentives are aligned among all parties.
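The three-step loop maps onto TDD roughly like this sketch (the requirement and function names are made up): the test encodes steps 1 and 3, the implementation is step 2 - and note what the loop structurally cannot catch, which is where a QA role earns its keep.

```python
# Step 1: a requirement, written down as an executable expectation.
#   "Usernames are case-insensitive and surrounding whitespace is ignored."
# Step 3: verification is just running this check.
def check_requirement():
    assert normalize_username("  Alice ") == "alice"
    assert normalize_username("ALICE") == normalize_username("alice")

# Step 2: the implementation that must satisfy it.
def normalize_username(raw: str) -> str:
    return raw.strip().lower()

check_requirement()

# What this loop CANNOT catch: a hole in the requirement itself.
# Nobody specified, say, how Unicode lookalike characters behave --
# the higher-level gap a QA person tends to surface by probing
# assumptions rather than re-checking the stated spec.
```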
In the age of AI of course not, AI is your QA
100%, and I’m a software developer and have been for ~30 years. Good QA people know how to find regressions and bugs _that you didn’t think about_, which is the whole reason why it shouldn’t sit under “engineering” and why it should exist. One of the QA people I work with currently is one of my favorite people. They don’t always make me happy (in the moment) with their bugs or with how they decide to break the software, but in the end it makes for a better, more resilient product.