Hacker News

Quadratic Micropass Type Inference

21 points | by simvux | last Friday at 8:37 PM | 6 comments

Comments

nextaccountic | today at 8:11 PM

I think that a way to back up your claims is to compare the type errors from your approach with type errors from vanilla bidirectional type inference, in the same language.

Also: won't this type fewer programs than bidirectional type inference?

ux266478 | today at 7:00 PM

> I propose a new kind of type inference algorithm that always prioritises the type unifications the end-user is most likely to care about, independent from the types order in source code.

> If we could instead unify types in the order the end-user deems most important

I think the problem with this is that the desirable priority is context- and programmer-dependent, and no real reason is given for why ordering expressions is unacceptable. This approach also apparently isn't independent of the order of expressions; it just imposes additional structure on top of that with multiple inference passes, making the whole affair more rigid and more complicated. I can only guess this makes reasoning about higher-order polymorphism much harder.
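A toy unifier (my own sketch, not the article's algorithm) makes the order-dependence concrete: the same constraint set, processed in two different orders, blames two different sites.

```python
# Minimal sketch: types are concrete names ("int", "str") or type
# variables (prefixed with '). Constraints are processed in list order,
# so which mismatch gets reported first depends entirely on that order.
def unify(constraints):
    subst = {}

    def resolve(t):
        while t in subst:
            t = subst[t]
        return t

    for lhs, rhs in constraints:
        a, b = resolve(lhs), resolve(rhs)
        if a == b:
            continue
        if a.startswith("'"):    # bind a type variable
            subst[a] = b
        elif b.startswith("'"):
            subst[b] = a
        else:                    # two different concrete types: first error wins
            return f"mismatch: {a} vs {b}"
    return "ok"

cs = [("'x", "int"), ("'x", "str"), ("'x", "bool")]
print(unify(cs))        # mismatch: int vs str
print(unify(cs[::-1]))  # mismatch: bool vs str
```

Reordering the constraints is exactly what reordering expressions does in a traditional single-pass unifier, which is why ranking on source order alone is easy to internalize.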

Having it ranked on order alone is something which is simple, easy to internalize, easy to reason about, and gives the utmost control to the programmer. For instance with the example given, this is the intuitive way I'd have constructed that function without even really thinking about it:

    fn example(x) -> Point[int] {
        let p = Point(x, x);
        log(["origin: ", x]);
        return p;
    }
netting the "ideal" error.

I think the problem here stems from an expectation that type inference is a kind of magic wand for getting something like dynamic typing. When no thought is put into the actual structure of the program WRT type inference, what you get instead is an icky system that isn't Just Doing The Thing. Vibes-based typing? If that's the goal, I wonder if it might be better served by fuzzy inference: multiple passes generating potential conclusions with various confidence scores.

edmundgoodman | last Tuesday at 7:24 AM

This is really cool!! It looks interesting for making errors in complex type systems easier to debug, but the quadratic performance of the title sounds a bit worrying for production compiler use — and imo the benchmarks don’t really mean anything without a point of reference to a traditional unification implementation.

If this system only provides benefits in the type-error path of the compiler I wonder if a traditional single-pass unification could be used for speed on the common path of code compiling without type errors, then when unification fails this slower multi-pass approach could be run on-demand to give better error reporting. This could lazily avoid the cost of the approach in most cases, and the cases in which it would be used are less latency critical anyway.
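The two-tier idea sketches out simply (all function names here are invented placeholders, not any real compiler API):

```python
# Sketch of the proposed hybrid: a cheap single-pass unifier on the happy
# path, with the slower multi-pass diagnosis run only after a failure.
def fast_unify(program):
    # Placeholder for a traditional one-pass unification:
    # returns (succeeded, result).
    return ("type_error" not in program, "typed ast")

def slow_diagnose(program):
    # Placeholder for the quadratic, error-ranking analysis.
    return "ranked diagnostic report"

def check(program):
    ok, result = fast_unify(program)
    if ok:
        return result              # common case: no extra latency
    return slow_diagnose(program)  # error case: pay for better errors

print(check("let x = 1"))          # typed ast
print(check("type_error here"))    # ranked diagnostic report
```

This mirrors how some compilers already re-run analyses in a more expensive mode purely to improve diagnostics once an error is known to exist.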

Also, I think there is a typo in one of the code blocks: ‘2 should be unified into (string, string) not just string afaict
