Machine learning is definitely helping to write _proofs_ within a proof assistant, and I'm sure it will help make formal verification more viable in the future.
Where it cannot (fully) replace humans is in writing the _theorems_ themselves. A human still has to check that the theorem being proven is actually what you were trying to prove, and that step is not safe from LLM hallucinations. If you ask an LLM whether a bridge is safe, and it writes `Theorem bridge_is_safe : 1 + 1 = 2.` and proves that theorem, that does _not_ mean the bridge is safe...
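To make the gap concrete: the statement above actually goes through in Coq exactly as written, which is the whole problem. A minimal sketch:

```coq
(* The name "bridge_is_safe" is purely suggestive; nothing in the
   statement refers to any bridge. *)
Theorem bridge_is_safe : 1 + 1 = 2.
Proof. reflexivity. Qed.

(* The checker accepts this because 1 + 1 really is 2. Whether the
   statement has anything to do with a real bridge is a question the
   checker never sees; a human has to read the statement. *)
```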
The article then also makes some wild extrapolations:
> We could imagine an LLM assistant for finance that provides an answer only if it can generate a formal proof that it adheres to accounting rules or legal constraints.
I guess that's trivially true, in the sense that you could imagine it. But it's not going to happen, because you cannot formalize a financial or legal statement in a proof assistant. It's a fundamentally informal, real-world thing, and proof assistants are fundamentally for proving formal, abstract things.
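Even for the fragments you can write down, the same gap as the bridge example shows up. A hypothetical sketch, with the `ledger` and `balanced` definitions invented here purely for illustration:

```coq
Require Import ZArith List.
Open Scope Z_scope.

(* A made-up toy model: a ledger is just a list of signed amounts. *)
Definition ledger := list Z.

(* A toy formal property: the entries sum to zero. *)
Definition balanced (l : ledger) : Prop :=
  fold_right Z.add 0 l = 0.

(* We can prove facts about the toy model... *)
Example tiny_ledger_balanced : balanced (100 :: (-100) :: nil).
Proof. reflexivity. Qed.

(* ...but whether "balanced" captures "adheres to accounting rules or
   legal constraints" is an informal judgment that never enters the
   proof assistant at all. *)
```

You can always prove properties of a model; what you cannot prove is that the model is the law.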
Yes.
Here is another way to think of this. We all understand that the value of a lawyer in contract negotiations lies not only in drafting a document that, when fed to a judge, produces the desired outcome. Lawyers also help clients (and counterparties) work out what their interests actually are.
Developing software is always something of a principal-agent coordination problem, and comes with transaction costs.
Much of the time, most of us labor under the illusion that each of us understands our desires and interests better than any other party could.