Hacker News

Why don't you use dependent types?

267 points by baruchel | last Sunday at 3:06 PM | 111 comments

Comments

lacker | last Sunday at 4:13 PM

Dependent types are very useful for some things. For example, I wish Python had the ability to express "a 10 x 5 matrix of float32s" as a type, and typecheck that.
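
As an illustration of what that buys you, here is a minimal Lean 4 sketch of a shape-indexed matrix type; the names `Mat`, `zeros`, `add`, and `transpose` are made up for this example rather than taken from any library:

    -- A dimension-indexed matrix type: the shape is part of the type, so a
    -- 10 x 5 / 5 x 10 mix-up is a type error rather than a runtime surprise.
    def Mat (rows cols : Nat) (α : Type) : Type :=
      Fin rows → Fin cols → α

    def zeros (rows cols : Nat) : Mat rows cols Float :=
      fun _ _ => 0.0

    -- Elementwise addition only accepts matrices of the same shape.
    def add {r c : Nat} (a b : Mat r c Float) : Mat r c Float :=
      fun i j => a i j + b i j

    -- Transposition swaps the dimensions in the type as well as in the data.
    def transpose {r c : Nat} {α : Type} (a : Mat r c α) : Mat c r α :=
      fun i j => a j i

    #check zeros 10 5                       -- Mat 10 5 Float
    #check add (zeros 10 5) (zeros 10 5)    -- fine: shapes agree
    -- #check add (zeros 10 5) (zeros 5 10) -- rejected at type-checking time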

The Curry-Howard correspondence, using a dependent type system to make "proofs" correspond to "programs" (and propositions to types), is powerful, but it can be really confusing. From a human point of view, there is a huge difference between "Your proof is wrong" and "You wrote a statement that fails typechecking".

Intuitively, when you make an error with types, it should be something fairly trivial: you read the error and fix it up. When you make an error in a proof, it's understandable if it's very complicated and requires thought to fix. The natural UI is different.

So I agree with the author that the greatest benefit of Lean is not its type system per se, but its community. Specifically, Lean's library of mathematics, mathlib, is organized like an open source community with pull requests, whereas Isabelle's library of mathematics, the AFP, is organized like a scientific journal with referees.

I'm working on a dependent type system at the moment for a new theorem prover, Acorn (https://acornprover.org), and my hope is to combine the good points of both Lean and Isabelle. It's nice that Lean has the power to cleanly express the simple dependent types that mathematicians often use, like vector spaces or quotients. But if you use dependent types too much, it does get complicated to debug what's happening.

Gajurgensen | last Sunday at 5:09 PM

Very interesting. My takeaway is that Dr. Paulson's answer to the question is that there is not anything necessarily wrong with dependent types, but that he doesn't believe they are necessary.

I would have liked to read more about Lean's alleged performance issues, and the issues around intensional equality. For the latter, I understand one can run into the need for heterogeneous equality (https://lean-lang.org/doc/reference/latest/Basic-Proposition...) when types are propositionally equal but not definitionally equal. It has been some time since I worked seriously in a dependently-typed language, but I recall coming to the conclusion that dependent types are best used as little as possible, for exactly this reason. If something could be stated as a theorem after the fact instead of being put in the type, that was my preference.
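
To make that concrete, here is a small, self-contained Lean 4 sketch (with a hand-rolled `Vec`, not the library type) of where definitional and propositional equality of type indices part ways and heterogeneous equality enters:

    -- A length-indexed vector, defined here only to keep the example self-contained.
    inductive Vec (α : Type) : Nat → Type where
      | nil  : Vec α 0
      | cons : {n : Nat} → α → Vec α n → Vec α (n + 1)

    -- Accepted: `n + 0` reduces to `n` by the definition of `Nat.add`,
    -- so the two types are *definitionally* equal.
    example {α : Type} {n : Nat} (xs : Vec α n) : Vec α (n + 0) := xs

    -- Rejected if uncommented: `0 + n = n` is only a theorem (*propositional*
    -- equality), so the types don't match without an explicit cast.
    -- example {α : Type} {n : Nat} (xs : Vec α n) : Vec α (0 + n) := xs

    -- The cast version type-checks...
    example {α : Type} {n : Nat} (xs : Vec α n) : Vec α (0 + n) :=
      cast (congrArg (Vec α) (Nat.zero_add n)).symm xs

    -- ...but the cast value and `xs` now live in different types, so relating
    -- them needs heterogeneous equality (`HEq`) rather than plain `=`.
    example {α : Type} {n : Nat} (xs : Vec α n) :
        HEq (cast (congrArg (Vec α) (Nat.zero_add n)).symm xs) xs :=
      cast_heq _ _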

Certainly there is something strongly aesthetically appealing about dependent type theory: the unification of programs and proofs, and the natural emergence of independent proof objects. I am open to the idea that an overly dogmatic insistence on a type-theoretic basis for a theorem prover could lead to pragmatic issues, but I'd need to see more examples to be convinced there is a better foundation.

Anyway, I agree with Dr. Paulson's point that dependent types aren't necessary to verify interesting systems. He talked more of pure mathematics, but I am more interested in software verification. I work heavily in ACL2, which not only lacks dependent types, it doesn't have static typing at all! It is, however, also a first-order logic, and both of these facts can sometimes be frustrating. Various libraries have been introduced to simulate typing and higher-order reasoning.

pron | last Sunday at 4:14 PM

It's interesting just how much of the debate in modern logic boils down to aesthetic preferences. On the other hand, I guess that if there were overwhelming practical advantages, there wouldn't be much to debate...

BTW, here's a "discussion paper" by Paulson and Leslie Lamport about typing in specification languages from 1999: https://www.cl.cam.ac.uk/~lp15/papers/Reports/lamport-paulso.... Paulson represents the "pro-type" view, but note that since that paper was written, there have been developments in mechanised theorem proving of untyped formalisms, including in Lamport's own (TLA+).

cwzwarich | last Sunday at 4:22 PM

The bigger problem with HOL (or simple type theory) is not the lack of dependencies, but rather the lack of logical strength. Simple type theory is equivalent in logical strength to bounded Zermelo set theory (i.e. ZF without Foundation or Replacement, and with Separation restricted to formulas with bounded quantifiers). This is unfortunately too weak to formalize post-WW2 mathematics in the same style as is done by ordinary mathematicians. Similarly, it does not offer a great way to deal with the size issues that arise in e.g. category theory.

tombert | last Sunday at 10:09 PM

My abandoned PhD was in formal methods and I find dependent types to be pretty neat, but every time I've tried using a dependently typed language for anything it's always been an uphill battle.

When I used to do F# for a living, I was really pushing for using F*, but it was impossible to get any buy-in from coworkers because the learning curve was comparatively steep. I probably should have pushed a bit more, but I figured that I'd potentially have more success trying to push people to use TLA+ (though that was also unsuccessful).

That was unsuccessful as well, and I've come to the (admittedly very cynical) conclusion that software engineers, generally speaking, will never learn anything new if it involves "math" in any way, shape, or form. Once I realized that, it became much easier to lower my expectations of my coworkers.

obeavs | last Sunday at 4:23 PM

So, we've been down this rabbit hole at Phosphor (phosphor.co) and have explored it and made a couple of really big technology bets on it.

The most unique and useful applications of it in production are based on combining dependent types with database/graph queries. This lets you take something like RDF, which is neat in a lot of ways but has a lot of limitations, add typing and logic to the queries, and generally reimagine how you think about querying databases.

For those interested in exploring this space from an "I'd like to build something real with this" angle, I'd strongly recommend checking out TypeDB (typedb.com). It's been in development for about a decade, is faster than MongoDB for vast swaths of things, and is one of the most ergonomic frameworks we've found for designing complex data applications (Phosphor's core is similar in many ways to Palantir's ontology concept). We went into it assuming that we were exploring a brand new technology, and have found it to work pretty comprehensively in all kinds of production settings.

griffzhowl | last Sunday at 4:21 PM

Great, I love this stuff.

See here for a summary of the many results of the author and team's research project on formalization:

https://www.cl.cam.ac.uk/~lp15/Grants/Alexandria/

Especially interesting for me is the work on formalizing quantum computing algorithms and theorems (open access):

https://link.springer.com/article/10.1007/s10817-020-09584-7

Rileyen | yesterday at 2:35 AM

I used to think dependent types were the future, but once I actually started working on real projects, I realized that while they might be logically elegant and reduce errors, development efficiency really takes a hit. These days, I care more about solutions that are clear, maintainable, and easy to get started with. If a tool gets the job done, can scale, and the team can understand it, then it's a good tool.

throwthrow0987 | yesterday at 5:20 PM

So it seems the thesis of some pretty in-the-know mathematicians is that the secret of dependent types is knowing when not to use them. Is that necessarily an argument against Lean or Rocq? In the sense that, could one simply not use the dependent types on offer in these languages in certain circumstances, and try to emulate an Isabelle proof using custom-built tactics?

a-saleh | yesterday at 7:46 AM

I think I mostly like type systems that lean towards dependent types but don't go all the way.

PureScript might be my favourite. Even by default you get more power than, e.g., vanilla Haskell, with row types. But then you can get type-level lists, type-level strings, even type-level regexes! And you use these through type classes in a kind of logic-programming way.

stevan | last Sunday at 7:41 PM

> But people have regularly asked why Isabelle dispenses with proof objects. The two questions are essentially the same, because proof objects are intrinsic to all the usual type theories. They are also completely unnecessary and a huge waste of space.

I believe proof by reflection relies on proof objects? Georges Gonthier's proof of the four-colour theorem crucially uses proof by reflection.
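
Gonthier's development is in Coq rather than Lean, but the mechanism is easy to see in a minimal Lean 4 sketch: a `Decidable` instance turns the proposition into a Boolean computation, the kernel evaluates it, and the stored proof object is essentially one lemma applied to a reflexivity proof rather than a long chain of arithmetic deductions. (The term-mode form below is one common way the `decide` machinery unfolds, shown here as an illustration.)

    -- Proof by reflection via `decide`: the proposition is evaluated to `true`
    -- by computation, and only a tiny proof object is kept.
    example : 12345 % 9 = 6 := by decide

    -- Roughly the same proof as an explicit term: one lemma applied to `rfl`,
    -- with all the real work done by kernel evaluation.
    example : 12345 % 9 = 6 := of_decide_eq_true rfl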

anonzzzieslast Sunday at 4:22 PM

You just prove / use dependently typed languages / TLA+ where it makes sense, not for everything. The latter might make sense if it's mostly automated, but it takes really painful, elaborate work to get full coverage, and most stuff for sure doesn't need that. I always think these formal methods plus unit/integration tests cover so much that you are already far more robust than most software on earth.

netdevphoenix | yesterday at 10:40 AM

Sounds like for most implementations of DTs, you have to go all in, which is likely overkill for many LoB apps doing CRUD with some custom logic on top. The ideal would be to be able to delegate some modules to a separate system using DTs and keep the rest in your good old OOP.

zozbot234 | last Sunday at 5:47 PM

The claim that dependently typed languages are inherently reliant on fully written-out proof objects looks quite wrong to me. You could easily imagine a proof term with opaque typed "holes" (written `_`), where the content of each hole is simply replaced by an LCF-like proof script that has somehow been shown to generate some term of the correct type, starting from its environment. ("Somehow" means in entirely unspecified ways, having to do with the programming language that the LCF-like checker uses for its implementation; so the soundness boundary has been expanded a lot, and we have given up on having an easily checkable 'kernel'!) Since the content is opaque, no other part of the proof development can tell what exactly was in the hole, and we can dispense with writing that part of the proof term out.
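
For a sense of the shape being proposed, Lean 4 already lets the holes of a term be discharged by embedded tactic scripts, though unlike the scheme sketched above the elaborator still records full terms for them. A toy example:

    -- A proof term whose two "holes" are filled by tactic scripts rather than
    -- by written-out subterms; nothing outside the holes depends on their contents.
    example : (∀ n : Nat, n ≤ n + 1) ∧ 2 + 2 = 4 :=
      ⟨by intro n; exact Nat.le_succ n, by rfl⟩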

shiandow | yesterday at 12:14 PM

Is it me or do dependent types look really similar to the axiom of choice?
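
There is a genuine connection: when "exists" is a Σ-type that carries its witness, the type-theoretic axiom of choice is simply a theorem, whereas the proof-irrelevant `∃` version needs the classical choice axiom. A minimal Lean 4 sketch of both:

    -- With Σ'-types, the "choice function" is read straight off the dependent function.
    example {α β : Type} (R : α → β → Prop)
        (h : ∀ x : α, Σ' y : β, R x y) :
        Σ' f : α → β, ∀ x, R x (f x) :=
      ⟨fun x => (h x).1, fun x => (h x).2⟩

    -- With the proof-irrelevant `∃`, the witness has been erased, and recovering
    -- a function requires the axiom of choice (`Classical.choice` under the hood).
    example {α β : Type} (R : α → β → Prop)
        (h : ∀ x : α, ∃ y : β, R x y) :
        ∃ f : α → β, ∀ x, R x (f x) :=
      ⟨fun x => Classical.choose (h x), fun x => Classical.choose_spec (h x)⟩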

fluffypony | last Sunday at 3:59 PM

Agree with this. The punchline here is not "dependent types bad", it is "choose your battles". Isabelle/HOL pushed frighteningly far without proof objects or dependent types, from schemes to BSG, and never hit the mythical wall. What moved the needle was automation, libraries, and legible proofs, not a fancier core calculus. Lean is great, but if the toolchain bogs down and equality games leak into your day, your fancy types are like Tesla FSD: impressive demo energy, unpredictable commute (no offense to anyone who uses it regularly). Knowing when not to use them is the real superpower imho.

If you need finely indexed invariants, sure, reach for DT. For the other 95%, HOL plus type classes and locales, backed by a small kernel and big libraries, will get you to production faster and with fewer regrets. Milner's LCF insight still pays the bills. And yes, croissants are delicious, but optional axioms are a risky breakfast.

boulevard | last Sunday at 9:20 PM

The real skill is knowing when not to make something dependent; otherwise you just slow yourself down.

heikkilevanto | last Sunday at 8:09 PM

I hate titles like "Why don't you use blah-blah". Usually it's because blah-blah might be an acceptable (maybe good?) solution to a problem which I don't have. Let me ask in return: why should I even care about blah-blah? If the first (two?) paragraphs don't give a clear answer to that, never mind!

golemotron | last Sunday at 6:19 PM

The juice isn't worth the squeeze.

Pxtl | last Sunday at 11:04 PM

Same reason I don't use a variety of other good programming ideas: the tools I use don't support it usefully.