The article argues that shared memory and message passing are the same thing because they share the same classes of potential failure modes.
Isn't it more like, message passing is a way of constraining shared memory to the point where it's possible for humans to reason about most of the time?
Sort of like Rust and C. Yes, you can write code inside 'unsafe' blocks in Rust that makes any mistake C can make. But the rules outside unsafe blocks, combined with the rules at module boundaries, greatly reduce the m * n blow-up of possible interactions in a codebase of a given size, letting us reason better about larger codebases.
Tangentially related: I haven't seen DragonFly BSD talked about on HN in a long while, but wasn't it a fork of FreeBSD built entirely around message passing as the core kernel construct?
And with the tiny team working on it, it has remarkable performance.
> Isn't it more like, message passing is a way of constraining shared memory to the point where it's possible for humans to reason about most of the time?
That's a good way to look at it. A process's mailbox is shared mutable state, but restrictions and conventions make a lot of things simpler when a given process owns its state and responds to requests than when the requesters can access the state in shared memory. But when the requests aren't well thought out, you can build all the same kinds of issues.
Let's say you have a process that holds an account balance. If the requests are "deposit X" or "withdraw Y", no problem (other than the Two Generals problem). If instead requesters get the balance, adjust it locally, and then send a "set balance" request, you have a classic race condition.
ETS can be mentally modeled as a process that owns the table (even though that's not how it's implemented), and the same thing applies: if the mutations you want to do aren't available as atomic requests, or you don't use those facilities, the mutation isn't atomic and you get all the consequences that come with that.
Circular message passing can be an easy mistake to make in some applications, too.
Exactly. Reading TFA and its prequel, I can't shake the feeling that the author doesn't really understand concurrency.
The main purpose of synchronization is creating happens-before relationships (memory-visibility ordering) between lines of code that aren't in the same program order. Go channels are just syntactic sugar for creating those happens-before relationships. Problems such as deadlocks and races (at least in the way that TFA calls them out) are irreducible complexity if you're executing two sequences of logical instructions in parallel. However you pass the data, there is no true isolation between those two sequences; all you can enforce is degrees of discipline.
It's typical AI slop. I'd recommend the author (or anyone else with an honest interest in the topic) watch Jenkov's course[1] first.
[1] https://www.youtube.com/playlist?list=PLL8woMHwr36EDxjUoCzbo...
The prequel, “Message Passing Is Shared Mutable State”, claims that heavily scrutinized Go codebases had just as many message-passing bugs (using Go channels) as shared-memory bugs. But then this article claims the Erlang community has a record of higher quality and reliability, achieved largely through discipline and convention.