So I have been doing formal specification with TLA+ using AI assistance, and it has been very helpful AFTER I REALIZED that quite often it was proving things that were either trivial or irrelevant to the problem at hand (and not the problem itself), in ways that were difficult to detect at a high level.
I realize formal verification with Lean is a slightly different game, but if anyone here has any insight: I tend to be extremely nervous about a confidently presented AI "proof", because while I am sure the proof proves whatever it is proving, it is still very hard for me to be confident that it is proving what I need it to prove.
Before the dogpiling starts, I'm talking specifically about distributed systems scenarios where it is just not possible for a human to think through all the combinatorics of the liveness and safety properties without proof assistance.
I'm open to being wrong on this, but I think the skill of writing a proof and understanding the proof is different from being sure it actually proves all the guarantees you have in mind.
I feel like closing this gap is make-or-break for AI-augmented proof assistance.
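To make that gap concrete, here is a minimal Lean 4 sketch of the failure mode I mean (all the names, `Bad`, `Reachable`, `safe`, are invented for illustration): a "safety" theorem that compiles and checks, but only vacuously, because the reachability predicate was mis-stated so that nothing satisfies it.

```lean
-- Minimal sketch; all names here are invented for illustration.
-- `Bad` is the property we want to rule out; `Reachable` was mis-stated
-- so that no state satisfies it, which makes the "safety" theorem vacuous.
def Bad (s : Nat) : Prop := s > 100
def Reachable (s : Nat) : Prop := s < s   -- oops: nothing is ever "reachable"

-- This compiles and looks like a safety proof, but it says nothing about the system.
theorem safe : ∀ s, Reachable s → ¬ Bad s :=
  fun s hs => absurd hs (Nat.lt_irrefl s)
```

Nothing in the checker complains about this; the only defense is a human (or a separate check) confirming that `Reachable` actually describes the system.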
Interesting. It's essentially the same idea as in this article: https://substack.com/home/post/p-184486153. In both scenarios, the human is relieved of the burden of writing complex formal syntax (whether Event-B or Lean 4). The human specifies intent and constraints in natural language, while the LLM handles the work of formalization and satisfying the proof engine.
But Lean 4 is significantly more rigid, granular, and foundational than e.g. Event-B, and they handle concepts like undefined areas and contradictions very differently. While both are "formal methods," they were built by different communities for different purposes: Lean is a pure mathematician's tool, while Event-B is a systems engineer's tool. Event-B is much more flexible, allowing an engineer (or the LLM) to sketch the vague, undefined contours of a system and gradually tighten the logical constraints through refinement.
LLMs are inherently statistical interpolators. They operate beautifully in an Open World (where missing information is just "unknown" and can be guessed or left vague) and they use Non-Monotonic Reasoning (where new information can invalidate previous conclusions). Lean 4 operates strictly on the Closed World Assumption (CWA) and is brutally Monotonic. This is why using Lean to model things humans care about (business logic, user interfaces, physical environments, dynamic regulations) quickly hits a dead end. The physical world is full of exceptions, missing data, and contradictions. Lean 4 is essentially a return to the rigid, brittle approach of the 1980s expert systems. Event-B (or similar methods) provides the logical guardrails, but critically, it tolerates under-specification. It doesn't force the LLM to solve the Frame Problem or explicitly define the whole universe. It just checks the specific boundaries the human cares about.
I just completed the formal verification of my bachelor's thesis on real-time cellular automata with Lean 4, with heavy use of AI.
Over the past year, I went from fully manual mode (occasionally asking ChatGPT some Lean questions) to fully automatic mode, where I barely do Lean proofs myself now (and just point the AI at the original .tex files, in German). It is hard to believe how much the models and agentic harnesses have improved over the last year.
I cannot describe how much fun it is to do refactorings with AI on a verified Lean project!
Also, it's so easy now to have visualizations and typeset documents generated by AI, from dependency visualizations of proofs using the Lean reflection API, to visual execution traces of cellular automata.
Lean is a great idea, especially the 4th version, a huge level up from the 3rd one, but its core is still deficient[1] in some particular scenarios (see an interesting discussion[2] in the Rocq (formerly Coq) issue tracker). Not sure whether that might hinder automation with AI.
I am using Lean as part of the prd.md description handed to a coding agent. The definitions in Lean compile and mean exactly what I want them to say. The implementation I want to build is in Rust.
HOWEVER … I hit something I now call a McLuhan vortex error: “When a tool, language, or abstraction smuggles in an implied purpose at odds with your intended goal.”
Using Lean implies to the coding agent that ‘proven’ is a pervasive goal.
I want to use Lean to be more articulate about the goal. Instead, using Lean smuggled in a difficult-to-remove implicit requirement that everything everywhere must be proven.
This was obvious because the definitions I made in Lean imply the exact opposite of "everything needs to be proven". When I say morphism I mean anything that is a morphism, not only things proven to be morphisms.
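For example, here is a hypothetical sketch (not my actual definitions) of the distinction I keep having to spell out: the predicate form below is what I mean by "morphism"; the bundled, proof-carrying form is what the agent keeps assuming I mean.

```lean
-- Hypothetical sketch, not my actual definitions.
-- `IsHom` covers "anything that is a morphism";
-- `Hom` covers "only things packaged with a proof that they are morphisms".
structure Mon where
  carrier : Type
  op : carrier → carrier → carrier
  one : carrier

def IsHom (M N : Mon) (f : M.carrier → N.carrier) : Prop :=
  f M.one = N.one ∧ ∀ a b, f (M.op a b) = N.op (f a) (f b)

structure Hom (M N : Mon) where
  toFun : M.carrier → N.carrier
  isHom : IsHom M N toFun
```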
A coding agent driven by an LLM needs a huge amount of structure to use what the math actually says, rather than take on the implication that, because it is using a proof system, everything everywhere is better if proven.
The initial way I used Lean poisoned the satisficing structure that unfolds during a coding pass.
If you want to mess with this at home, I've been vibe coding https://github.com/kig/formalanswer to plug theorem provers into an LLM call loop. It's pretty early dev but it does have a logic rap battle mode.
I think I saw Terence Tao use a formal proof language, but I don't remember if it was Lean. I'm not familiar with it, and I do agree that moving to provable languages could improve AI, but isn't the basis just having some immutable, rigorous set of tests, which could basically be replicated in "regular" programming languages?
Why is Lean 4 so slow with the main math package (Mathlib)?
The real value is in mixed mode:
- Lean supports calling out as a tactic, allowing you to call LLMs or other AI as judges (i.e., they return a judgment about a claim; rough sketch below)
- Lean can combine these judgments from external systems according to formal theories (i.e., normal proof mechanics)
- an LLM engaged in higher-order reasoning can decompose its thinking into logical steps over such fuzzy blocks
- this can be done recursively, e.g., having a step that replaces LLM judgments with further logical formulations of fuzzy judgments from the LLM
Something, something, sheaves.
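A rough sketch of the first bullet, under heavy assumptions (`llm-judge` is a made-up external command, and closing the goal with `sorry` is just one way to keep the fuzzy judgment visibly unverified):

```lean
import Lean
open Lean Elab Tactic

/-- Sketch only: pretty-print the current goal, pipe it to a hypothetical
    external `llm-judge` command, and if the judge accepts, close the goal
    with `sorry` so the accepted claim remains marked as unverified. -/
elab "llm_judge" : tactic => do
  let goal ← getMainGoal
  let claim ← Meta.ppExpr (← goal.getType)
  let out ← IO.Process.output { cmd := "llm-judge", args := #[toString claim] }
  if out.stdout.trim == "accept" then
    evalTactic (← `(tactic| sorry))
  else
    throwError "llm-judge rejected the claim: {claim}"
```

Tactics that shell out to external suggestion servers already exist in the wild; the sketch is only meant to show where a fuzzy judgment would enter the proof, so it can then be combined by ordinary proof mechanics.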
This has been the approach taken by some using LLMs, even in less type-heavy situations. Of course, it is part of a broader tradition in which search is combined with verification. Genetic programming and related areas come to mind. Here, LLMs are search, while Lean is used to express constraints.
> Large language models (LLMs) have astounded the world with their capabilities, yet they remain plagued by unpredictability and hallucinations – confidently outputting incorrect information. In high-stakes domains like finance, medicine or autonomous systems, such unreliability is unacceptable.
This misses a point that software engineers intimately know, especially ones using AI tools:
* Proofs are one QA tool
* Unit tests, integration tests and browser automation are other tools.
* Your code can have bugs that show up as failures in the checks above, BUT...
* You may have got the requirements wrong!
Working with Claude Code, you can have productive loops getting it to assist you in writing tests, finding bugs you hadn't spotted, and generally hardening your code.
It takes taste, and dev experience definitely helps (as of Jan 26).
So I think "hallucinations, with proofs as the fix" is barking up the wrong tree a bit.
The solution to hallucinations is careful shaping of the agent environment around the project to ensure quality.
Proofs may be part of the QA toolkit for AI-coded projects, but probably rarely.
I like a lot of the ideas behind such theorem provers; however, I always have issues with getting them to produce code compatible with other languages.
This happened to me with Idris and many others: I took some time to learn the basics, wrote some examples, and then the FFI was a joke or the JavaScript code generators were absolutely useless.
So no way of leveraging an existing ecosystem.
Machine learning is definitely enabling writing _proofs_ within a proof assistant, and I'm sure it will help to make formal verification more viable in the future.
Where it cannot (fully) replace humans is in writing the _theorems_ themselves. A human has to check that the theorem being proven is actually what you were trying to prove, and this is not safe from LLM hallucinations. If you ask an LLM "is this bridge safe?", and it writes `Theorem bridge_is_safe : 1 + 1 = 2.` and proves this theorem, that does _not_ mean the bridge is safe...
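In Lean 4 syntax, the same made-up example looks like this; it compiles, and every check comes back green, but "bridge" appears only in the identifier:

```lean
-- The made-up example above, in Lean 4: a perfectly valid, axiom-free proof
-- of an arithmetic fact whose name merely mentions a bridge.
theorem bridge_is_safe : 1 + 1 = 2 := rfl

#print axioms bridge_is_safe  -- 'bridge_is_safe' does not depend on any axioms
```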
The article then also makes some wild extrapolations:
> We could imagine an LLM assistant for finance that provides an answer only if it can generate a formal proof that it adheres to accounting rules or legal constraints.
I guess it's true because you could imagine this, hypothetically. But it's not going to happen, because you cannot formalize a financial or legal statement in a proof assistant. It's a fundamentally informal, real-world thing, and proof assistants are fundamentally for proving formal, abstract things.