Even there it's risky. LLMs are good at subtly misstating the problem, so it's relatively easy to get them to prove something that looks like the claim you wanted but is mostly unrelated to it.
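For a toy illustration (the theorem names here are just made up for the example), two classic ways a Lean statement can resemble the intended claim while proving far less:

```lean
-- Intended claim: addition is commutative for all naturals.
theorem add_comm_intended : ∀ a b : Nat, a + b = b + a :=
  Nat.add_comm

-- Subtle misstatement #1: the quantifier flipped, so this only asserts that
-- *some* pair of numbers commutes. It reads similarly but is nearly content-free.
theorem add_comm_misstated : ∃ a b : Nat, a + b = b + a :=
  ⟨0, 0, rfl⟩

-- Subtle misstatement #2: an unsatisfiable hypothesis makes the statement
-- vacuously true, so Lean accepts a "proof" that tells you nothing.
theorem vacuously_true (n : Nat) (h : n < n) : n + 1 = n :=
  absurd h (Nat.lt_irrefl n)
```

Lean checks all three without complaint; whether the statement matches what you actually meant is still on you (or the model) to get right.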
Yes, Lean only lets you be confident that the stated theorem really is proved, not in how the proof was formed. But I still think that's pretty cool and valuable.