Whoa, that’s a bit far. I’m a former pentester. I meaningfully improved security at quite a few places. The standout was Citadel, where a product was set to launch within a few weeks. When I first got there, typing ‘ into their search fields resulted in SQL injection right away. They had never thought to defend against it. Over the next week, I fed them a steady list of bugs and vulns (there were many) until by the end of it that product was watertight. I was particularly proud of that one.
Pentests work.
Pentests work to secure the product under test at the point in time of the test (if the company cares to fix the bugs...). The real solution is to design security in throughout the software lifecycle, not play pentest whack-a-mole at the end of it. If a pentester is finding trivial SQL injection in an app, it is clear the company never considered security. And unless the pentest makes them care, the cycle will just continue.
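For anyone wondering what "trivial" means here: the whole class disappears the moment you stop building queries by string concatenation. A minimal sketch of the contrast, using sqlite3 with an illustrative table and data:

```python
# Sketch: string-built SQL is injectable; a parameterized query is not.
# Table and column names here are illustrative, not from any real product.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT)")
conn.executemany("INSERT INTO products VALUES (?)", [("widget",), ("gadget",)])

malicious = "' OR '1'='1"  # classic tautology payload typed into a search field

# Vulnerable: the quote in the user input rewrites the query itself.
injectable = f"SELECT name FROM products WHERE name = '{malicious}'"
print(len(conn.execute(injectable).fetchall()))  # 2 -- every row leaks

# Safe: the driver binds the input as data, never as SQL text.
safe = conn.execute("SELECT name FROM products WHERE name = ?", (malicious,))
print(len(safe.fetchall()))  # 0 -- no row is literally named "' OR '1'='1"
```

The point being: this is a one-line discipline, which is exactly why finding it in a product near launch says so much about the process that built it.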
You do realize you're actually supporting the point that you are replying to. No amount of pentests, no amount of security products are going to solve the problem that a product was built that had a search field that was trivially injectable.
"Improved" is a useless word. Is their security now adequate? Is it secure against the run-of-the-mill financially motivated threat actors we regularly see orchestrating thousands of profitable attacks annually?
We regularly see attacks extorting tens of millions of dollars from major multinationals like Citadel. Is the cost of breaching their systems in excess of ten million dollars (which would net a nice fat profit against the multiple tens of millions extorted)? Could a team of 10 professionals, working for 1-3 years, really not breach their systems?
That is the minimum standard of adequacy against the commonplace, prevailing threats facing large multinationals. And that ignores the fact that major corporations are frequently attacked by state actors, so the minimum standard of protection against expected threats should really include those as well. I will leave that aside for now, though, since the overwhelming sentiment is that protection against state actors is so utterly hopeless it is not even worth mentioning.
For that matter, can you point to literally any system in the entire world that is positively demonstrated (absence of evidence is not evidence of absence) to have reached that standard?
In my experience pentests were just a box-ticking exercise. I consider it a cultural thing. If you're running a pentest right before release and it uncovers a vast number of issues, then you never cared about the quality of your software to begin with, and that would show up in more than just insecure software. Running automated test suites periodically should be part of software building practice, along with deep code reviews and so on, all of it feeding into the quality of what you're building.
The problem is getting the decision makers to care, and/or changing the process to at least treat quality as an important factor even when velocity is preferred (and featuritis has taken over).
Story time. A couple of weeks into one gig, I discovered that AWS keys to the production data in the S3 buckets were being exposed on the client side (an SPA). Those keys would give you access to the data for every client on the platform. So I figured I'd do "the right thing" and told my manager (the CTO), who said something along the lines of "yeah, that sounds serious" and asked me to talk to the CEO, who had written that code. At that point I was still expecting that I might be wrong, or at least to be told it was written in a rush or something and thanked for pointing it out. The CEO just dismissed them as "temporary production keys" and closed down the conversation. Suffice it to say I was not the CEO's favorite person going forward.