The funny part of “AI will make formal verification go mainstream” is that it skips over the one step the industry still refuses to do: decide what the software is supposed to do in the first place.
We already have a ton of orgs that can’t keep a test suite green or write an honest invariant in a code comment, but somehow we’re going to get them to agree on a precise spec in TLA+/Dafny/Lean and treat it as a blocking artifact? That’s not an AI problem, that’s a culture and incentives problem.
Where AI + “formal stuff” probably does go mainstream is at the boring edges: property-based tests, contracts, refinement types, static analyzers that feel like linters instead of capital‑P “Formal Methods initiatives”. Make it look like another checkbox in CI and devs will adopt it; call it “verification” and half the org immediately files it under “research project we don’t have time for”.
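To make that concrete, here's a minimal sketch of the "linter-like" end of the spectrum: a property-based test using the Hypothesis library. The encode/decode pair is a hypothetical stand-in for any round-trippable function in your codebase; the point is that the entire "spec" is one assert, running as an ordinary test in CI.

```python
# A one-property "spec" that runs like any other test in CI.
# encode/decode are hypothetical stand-ins for your own code.
from hypothesis import given, strategies as st

def encode(s: str) -> bytes:
    return s.encode("utf-8")

def decode(b: bytes) -> str:
    return b.decode("utf-8")

@given(st.text())  # Hypothesis generates many arbitrary strings
def test_roundtrip(s: str) -> None:
    # The whole spec: decoding an encoded value returns the original.
    assert decode(encode(s)) == s
```

Run it under pytest and it's just another red/green check; nobody has to notice it's secretly a universally quantified property.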
Will include this thread in my https://hackernewsai.com/ newsletter.
"decide what the software is supposed to do in the first place."
After 20 years of software development, I think that's because most software out there is itself the method of finding out what it's supposed to do.
The specs aren't incomplete because of a lack of discipline. They're incomplete because nobody can know what the software should be without trying it out.
I mean, of course there's a subset of software that can be specified beforehand - but a lot of it can't be.
Knuth could be that forward-thinking with TeX, for example, only because he had 500 years of book-printing tradition to fall back on when backporting the spec to math.
Yeah, the vast majority of the history of "getting things done" has been: find some guy who can translate "make the crops grow" into a pile of food.
The people who care about the precise details have always been relegated to a tiny minority, even in our modern technological world.
> decide what the software is supposed to do in the first place.
That's where the job security is (and always has been). This has been my answer to "are you afraid for your job because of AI?"
Writing the code is very rarely the hard part. The hard part is getting a spec from the PM, or gathering requirements from stakeholders. And then telling them why the spec / their requirements don't make sense or aren't feasible, and figuring out ones that will actually achieve their goals.
OP doesn't seem broadly applicable to corporate software development.
Rather, it's aimed at the kind of niche, mission-critical systems that don't all get the formal verification they need, or that never get it considered, because of the high cost of the specialized skills involved.
I read OP as a realization that those costs have fallen, and that we should therefore see more formal verification than before.
> it skips over the one step the industry still refuses to do: decide what the software is supposed to do in the first place.
Not only that, but it's well established that a significant challenge with formally verified software is creating the right spec -- i.e. one that actually satisfies the intended requirements. A formally verified program can still have bugs, because the spec (which requires specialized skills to read and understand) may fail to capture the intent of the requirements in some way.
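The classic toy illustration of that gap (not from OP, just a sketch in the same property-testing style as above): a "verified" sort whose spec only says the output is ordered. An implementation that throws the data away satisfies it perfectly.

```python
from hypothesis import given, strategies as st

def broken_sort(xs: list[int]) -> list[int]:
    # Buggy "sort": discards all the input, yet its output is trivially ordered.
    return []

@given(st.lists(st.integers()))
def test_output_is_sorted(xs: list[int]) -> None:
    out = broken_sort(xs)
    # The incomplete spec: output must be in ascending order...
    assert all(a <= b for a, b in zip(out, out[1:]))
    # ...but it never says the output is a permutation of the input,
    # so broken_sort passes. The missing clause would be something like:
    #   assert sorted(xs) == out
```

The proof (or test) holds; the program is still wrong, because the spec didn't say what we meant.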
So the fundamental issue/bottleneck that emerges is the requirements <=> spec gap, which closing the spec <=> executable gap does nothing to address. Translating people's needs to an empirical, maintainable spec of one type or another will always require skilled humans in the loop, regardless of how easy everything else gets -- at minimum as a responsibility sink, but even more as a skilled technical communicator. I don't think we realize how valuable it is to PMs/executives and especially customers to be understood by a skilled, trustworthy technical person.