I like this, but unfortunately it doesn't solve one annoying problem: lexical scope doesn't work, and it fails in unexpected ways.
If you reference something lexically, your code fails at runtime. Want to use an import? You have to use import() inside the closure you pass to spawn(). TypeScript doesn't know this. Your language server doesn't know this. Access a variable that shadows a built-in global? Now you're accessing the built-in global.
The only way this could even be addressed is with a full-on parser, and even then you can't guarantee things will work.
I think the only "fix" is for JS to introduce new syntax for a function that can't access lexical scope, returning a value that either extends a subclass of Function or has a cheeky symbol set on it. At least then it would fail at compile time.
This part is beautiful:
> Serialization Protocol: The library uses a custom "Envelope" protocol (PayloadType.RAW vs PayloadType.LIB). This allows complex objects like Mutex handles to be serialized, sent to a worker, and rehydrated into a functional object connected to the same SharedArrayBuffer on the other side.
It's kinda "well, yes, you can't share objects, but you can share memory. So make objects that are just thin wrappers around shared memory"
I'd be interested to see a comparison with https://piscinajs.dev/ - does this achieve more efficient data passing for example?
Lack of easy shared memory has always felt like a problem to me in this space, as often the computation I want to off-load requires (or returns) a lot of data.
This looks great. If it works as well as the readme suggests, this’ll let me reach for Bun in some of the scenarios where I currently reach for Go. TypeScript has become my favorite language, but the lack of efficient multithreading is sometimes a deal breaker.
This is incredible! The SharedJsonBuffer got me all excited!
Writing module bundlers in JavaScript had diminishing returns from multithreading because of the overhead of serializing and deserializing ASTs.
I wonder how far something like this would push the ceiling. Would love to see some benchmarks of this thing hauling ASTs around.
This seems very much worth a look!
(I suspect, to paraphrase Greenspun's rule, any sufficiently complicated app using Web Workers contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of this library...)
I'm confused why drop() is a function that you have to import inside the closure instead of a method.
From an overall system point of view, this is the current pinnacle of footgun design.
The OS does thread management and scheduling, facilitates IPC, locking, etc. All of this is one big, largely-solved problem (at least for the kind of things most people are doing in JavaScript today). But because of history, we now have a very popular language and runtimes trying to replicate all these features, reinventing wheels and adding layers of inefficiency to the overall execution.
Sigh.
The documentation here is exceptionally well written for a JS project. That said, move() doing different things depending on the type of data you pass it feels like a foot-gun. Also, how is it blocking access to arrays you pass to it?
The implementation of the shared json buffer is nuts
This is cool! Hope we can get multi-threaded wasm some time soon.
I’ve played around with Web Workers and just could never seem to get over the latency issues.