My abandoned PhD was in formal methods and I find dependent types to be pretty neat, but every time I've tried using a dependently typed language for anything it's always been an uphill battle.
When I used to do F# for a living, I was really pushing for F*, but it was impossible to get any buy-in from coworkers because the learning curve was comparatively steep. I probably should have pushed harder, but I figured I'd have more success pushing people toward TLA+ instead.
That was unsuccessful as well, and I've come to the (admittedly very cynical) conclusion that software engineers, generally speaking, will never learn anything new if it involves "math" in any way, shape, or form. Once I realized that, it became much easier to lower my expectations of my coworkers.
Oh, people will learn new stuff. They do all the time.
I think they simply decided it was not worth it. In the end, we have limited time on this planet.
I would be interested to read the author's take on F*. Since they are focused on math applications in this post, and F* is very much focused on verified software and not on mathematics, it is a bit off topic. My (inexpert) understanding is that F* is based on an extensional type theory (different from CoC in this way) and can, but needn't, work with proof objects, since its refinement typing can just push constraints into the SMT database. I've found it pretty lightweight and "opt-in" on the dependent types, and so it seems quite nice from a usability perspective IMO. But, as I say, I think it really would not address the author's question "how do we know it's right". On this point, I think the worlds of software verification and automated theorem proving for mathematics can be quite far apart.
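To make the "opt-in" point concrete, the flavor is roughly something like this (a toy sketch, not real project code; the module and `safe_div` name are just for illustration). The refinement on the divisor is handed to the SMT solver (Z3) at each call site, and you never write a proof term by hand:

    module Sketch

    (* The refinement on y is discharged by Z3 automatically;
       callers never construct an explicit proof object. *)
    let safe_div (x:int) (y:int{y <> 0}) : int = x / y

    let ok = safe_div 10 3        (* accepted: Z3 proves 3 <> 0 *)
    (* let bad = safe_div 1 0 *)  (* rejected at type-checking time *)

Compare that with a Coq/Lean-style proof assistant, where you would typically supply an explicit proof term or tactic script at this point, which is roughly the gap between the two worlds I mentioned above.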