I remember playing with Alpaca a few years ago, and it was fun, though I didn’t find the resulting code to be significantly less error-prone than the regular Erlang I wrote. It’s inelegant, but I find that Erlang’s quasi-runtime typing with pattern matching gets you pretty far, and it fits Erlang’s “let it crash” philosophy nicely.
Honestly, and I realize that this might get me a bit of flak here, and that’s obviously fine, but I find type systems start losing utility with distributed applications. Ultimately everything being sent over the wire is just bits. The wire doesn’t care about monads or integers or characters or strings or functors, just 1s and 0s, and I feel like imposing a type system often gets in the way more than it helps. There’s so much weirdness and uncertainty in what goes over the wire, and pretty types often don’t really capture that.
I haven’t tried Gleam yet, but I will give it a go, and it’s entirely possible it will change my opinion on this; I’m willing to have my mind changed.
> Honestly, and I realize that this might get me a bit of flak here, and that’s obviously fine, but I find type systems start losing utility with distributed applications. Ultimately everything being sent over the wire is just bits.
Actually Gleam somewhat shares this view: it doesn't pretend that you can do type-safe distributed message passing (and it doesn't fall into the decades-running trap of trying to solve this). Distributed computing in Gleam involves handling dynamic messages the same way you'd handle any other data arriving from outside the system.
This is a bit more boilerplate-y, but IMO it's preferable to the other two options: pretending it's type-safe, or not supporting distribution at all.
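For a rough idea of what that boilerplate looks like, here's a minimal sketch using the `gleam/dynamic/decode` API from recent stdlib versions (the message shape, a map with `id` and `payload` fields, is made up for illustration):

```gleam
import gleam/dynamic.{type Dynamic}
import gleam/dynamic/decode

// The shape we *expect* a remote message to have.
pub type Job {
  Job(id: Int, payload: String)
}

fn job_decoder() -> decode.Decoder(Job) {
  use id <- decode.field("id", decode.int)
  use payload <- decode.field("payload", decode.string)
  decode.success(Job(id: id, payload: payload))
}

// Whatever arrives over the wire is only a Dynamic; it has to be
// decoded into a known type before any typed code sees it.
pub fn receive_job(raw: Dynamic) -> Result(Job, List(decode.DecodeError)) {
  decode.run(raw, job_decoder())
}
```

The decoder either produces a well-typed `Job` or a list of errors you can log or crash on; nothing downstream has to care which node the bits came from.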
Interesting! I don't share that view at all. I mean, everything running locally is just bits too, right? Your CPU doesn't care about monads or integers or characters or strings or functors either. But ultimately your higher-level code does expect data to conform to some invariants, whether you explicitly model them or not.
IMO the right approach is to parse everything into a known type at the point of ingress; from there you deal only with your language's native data structures.
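To stay with the Gleam sketch above, that boundary could look something like this (`Command`, `handle_ingress`, and `execute` are hypothetical names, not from any real codebase):

```gleam
import gleam/dynamic.{type Dynamic}
import gleam/dynamic/decode
import gleam/result

pub type Command {
  Command(action: String, count: Int)
}

fn command_decoder() -> decode.Decoder(Command) {
  use action <- decode.field("action", decode.string)
  use count <- decode.field("count", decode.int)
  decode.success(Command(action: action, count: count))
}

// Dynamic appears exactly once, at the edge of the system...
pub fn handle_ingress(raw: Dynamic) -> Result(Int, List(decode.DecodeError)) {
  decode.run(raw, command_decoder())
  |> result.map(execute)
}

// ...and everything past the boundary works with plain typed values.
fn execute(command: Command) -> Int {
  command.count
}
```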
You seem to have a fundamental misunderstanding about type systems. Most (the best?) type systems are erased: the types only have meaning at compile time, where they make sure your code is sound and, preferably, free of undefined behaviour.
The "its only bits" thing makes no sense in the world of types. In the end its machine code, that humans never (in practice) write or read.
I don’t understand this comment. Yes, everything going over the wire is bits, but both endpoints need to know how to interpret that data, right? Types are a great tool for this. They can even drive the exact wire protocol and the verification of both the data and the protocol version.
So it’s hard to see how types get in the way instead of being the ultimate toolset for shaping distributed communication protocols.
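As a sketch of that idea in Gleam, using the same `gleam/dynamic/decode` API as above (the `Ping` message and its `version` field are invented for illustration), a decoder can refuse anything that doesn't match both the expected shape and the expected protocol version:

```gleam
import gleam/dynamic/decode

pub type Ping {
  Ping(seq: Int)
}

// The type drives the protocol: a v1 ping must carry version: 1 and
// an integer seq, otherwise decoding fails with a descriptive error.
pub fn ping_v1_decoder() -> decode.Decoder(Ping) {
  use version <- decode.field("version", decode.int)
  case version {
    1 -> {
      use seq <- decode.field("seq", decode.int)
      decode.success(Ping(seq: seq))
    }
    _ -> decode.failure(Ping(seq: 0), "Ping v1")
  }
}
```

On the wire it's still just bits; the type's job is to make sure those bits are checked against the protocol exactly once, in one place.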