I'm not sure I follow this.
> WebAssembly is the wrong abstraction for running untrusted apps in a browser
WebAssembly is a better fit than JS for a platform running untrusted apps. WebAssembly has a sandbox and was designed for untrusted code. It's almost impossible to statically reason about JS code, so browsers need a ton of error-prone dynamic security infrastructure to protect themselves from guest JS code.
> Browser engines evolve independently of one another, and the same web app must be able to run in many versions of the same browser and also in different browsers. Dynamic typing is ideal for this. JavaScript has dynamic typing.
There are dynamic languages, like JS and Python, that can compile to wasm. Also, I don't see how dynamic typing is required for API evolution and compatibility. Plenty of platforms have statically typed languages and evolve their APIs in backwards-compatible ways.
> Browser engines deal in objects. Each part of the web page is an object. JavaScript is object oriented
The first major language for WebAssembly was C++, which is object-oriented.
To be fair, there are a lot of challenges to making WebAssembly first class on the Web. I just don't think these issues get to the heart of the problem.
> WebAssembly has a sandbox and was designed for untrusted code.
So does JavaScript.
> It's almost impossible to statically reason about JS code, so browsers need a ton of error-prone dynamic security infrastructure to protect themselves from guest JS code.
They have that infrastructure because JS has access to the browser's API.
If you tried to redesign all of the web APIs in a way that exposes them to WebAssembly, you'd have an even harder time than exposing those APIs to JS, because:
- You'd still have all of the security troubles. The security troubles come from having to expose APIs that can be called adversarially and be handed adversarial data.
- You'd also have the impedance mismatch that the browser is reasoning in terms of objects in a DOM, while WebAssembly is a bunch of integers.
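To make that concrete, here's a minimal sketch of the glue you write today just to hand a browser string to a linear-memory wasm module. The module name and the `alloc`/`process_text` exports are hypothetical:

```ts
// Hypothetical glue: pass document.title to a wasm export that only
// understands (pointer, length) pairs of integers.
const { instance } = await WebAssembly.instantiateStreaming(fetch("app.wasm"), {});
const { memory, alloc, process_text } = instance.exports as unknown as {
  memory: WebAssembly.Memory;
  alloc: (len: number) => number;                   // assumed allocator export
  process_text: (ptr: number, len: number) => void; // assumed entry point
};

const bytes = new TextEncoder().encode(document.title);
const ptr = alloc(bytes.length);               // ask the module for space
new Uint8Array(memory.buffer).set(bytes, ptr); // flatten the string to bytes
process_text(ptr, bytes.length);               // object identity is gone: just ints
```

Every API that traffics in strings, nodes, or promises needs a hand-rolled version of this dance in both directions.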
> There are dynamic languages, like JS and Python, that can compile to wasm.
If you compile them to linear-memory wasm instead of just running them directly in JS, then you lose the ability to do coordinated garbage collection with the DOM.
If you compile them to GC wasm instead of running directly in JS then you're just adding unnecessary overheads for no upside.
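For the DOM specifically, the usual workaround is a handle table on the JS side: the wasm module only ever holds integer handles, and every retained object stays pinned until the module explicitly frees it, which is exactly the coordinated-GC problem. A sketch, with all names illustrative:

```ts
// Illustrative handle table: linear-memory wasm can only store integers,
// so DOM references live on the JS side, keyed by handle.
const handles = new Map<number, object>();
let nextHandle = 1;

function retain(obj: object): number {
  const h = nextHandle++;
  handles.set(h, obj); // pins obj: the GC can't collect it until release()
  return h;
}

function release(h: number): void {
  handles.delete(h); // forget this call and the DOM node leaks forever
}

// Imports handed to the wasm module so it can touch the DOM by handle.
const imports = {
  env: {
    create_div: () => retain(document.createElement("div")),
    drop_ref: (h: number) => release(h),
  },
};
```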
> Also, I don't see how dynamic typing is required for API evolution and compatibility.
Because, for example, if a browser changes the type of something that happens to be unused, or removes it outright, that only breaks actual callers at time of use, not potential callers at time of load.
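The failure modes are easy to demonstrate. JS can feature-detect a missing API at the call site and degrade gracefully, while a wasm module's declared imports are resolved at instantiation, so a missing import fails the whole module before any code runs. A sketch, where `someLegacyApi` is a made-up name:

```ts
// JS: a removed API only bites at the point of use, and can be
// feature-detected around.
const nav = navigator as any;
if (typeof nav.someLegacyApi === "function") {
  nav.someLegacyApi();
} // else: fall back, and the rest of the page keeps working

// Wasm: every declared import must resolve at instantiation time.
try {
  await WebAssembly.instantiateStreaming(fetch("app.wasm"), {
    env: { /* someLegacyApi missing here */ },
  });
} catch (e) {
  // LinkError: the whole module is dead, even if the import is never called
}
```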
> Plenty of platforms have statically typed languages and evolve their APIs in backwards-compatible ways.
We're talking about the browser, which is a particular platform. Not all platforms are the same.
The largest comparable platform is OSes based on the C ABI, which rely on a kind of dynamic typing (stringly typed, basically: function names in a global namespace, plus argument-passing ABIs that let you mismatch a function signature and get away with it).
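The browser-side analogue of that dlsym-style binding is a string lookup in a global namespace at time of use, with nothing verifying that the caller's assumed signature matches reality. A toy sketch:

```ts
// dlsym-style late binding: resolve a symbol by string at time of use.
// Nothing checks that the signature matches what the caller assumes.
function lookup(name: string): ((...args: unknown[]) => unknown) | undefined {
  const f = (globalThis as any)[name];
  return typeof f === "function" ? f : undefined;
}

const clone = lookup("structuredClone"); // resolves only if present
if (clone) {
  const copy = clone({ a: 1 }); // the caller's assumed signature, unchecked
}
```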
> The first major language for WebAssembly was C++, which is object-oriented.
But the object orientation is lost once you compile to wasm. Wasm's object model when you compile C++ to it is an array of bytes.
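For example, if the C++ side has `struct Point { int32_t x; int32_t y; }`, all the embedder ever sees is an integer pointer, and reading the object back means hand-computing byte offsets. The `make_point` export here is hypothetical:

```ts
// Hypothetical export: make_point() returns a pointer to a Point
// living somewhere in linear memory.
const { instance } = await WebAssembly.instantiateStreaming(fetch("app.wasm"), {});
const { memory, make_point } = instance.exports as unknown as {
  memory: WebAssembly.Memory;
  make_point: () => number; // a C++ pointer is just an i32 out here
};

// No class, no methods, no identity: only offsets into a byte array.
const ptr = make_point();
const view = new DataView(memory.buffer);
const x = view.getInt32(ptr, true);     // Point::x at offset 0 (little-endian)
const y = view.getInt32(ptr + 4, true); // Point::y at offset 4
```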
> To be fair, there are a lot of challenges to making WebAssembly first class on the Web. I just don't think these issues get to the heart of the problem.
Then what's your excuse for why wasm, despite years of investment, is a dud on the web?
There's something real in the impedance mismatch argument that I think the replies here are too quick to dismiss. The browser's programming model is fundamentally about a graph of objects with identity, managed by a GC, mutated through a rich API surface. Linear memory is genuinely a poor match for that, and the history of FFI across mismatched memory models (JNI, ctypes, etc.) tells us this kind of boundary is where bugs and performance problems tend to concentrate. You're right to point at that.
Where I think the argument goes wrong is in treating "most websites don't use WASM" as evidence that WASM is a bad fit for the web. Most websites also don't use WebGL, WebAudio, or SharedArrayBuffer. The web isn't one thing. There's a huge population of sites that are essentially documents with some interactivity, and JS is obviously correct for those. Then there's a smaller but economically significant set of applications (Figma, Google Earth, Photoshop, game engines) where WASM is already the only viable path because JS can't get close on compute performance.
The component model proposal isn't trying to replace JS for the document-web. It's trying to lower the cost of the glue layer for that second category of application, where today you end up maintaining a parallel JS shim that does nothing but shuttle data across the boundary. Whether the component model is the right design for that is a fair question. But "JS is the right abstraction" and "WASM is the wrong abstraction" aren't really in tension, because they're serving different parts of the same platform.
The analogy I'd reach for is GPU compute. Nobody argues that shaders should replace CPU code for most application logic, but that doesn't make the GPU a "dud" or a second-class citizen. It means the platform has two execution models optimized for different workloads, and the interesting engineering problem is making the boundary between them less painful.