Hacker News

blundergoat | yesterday at 9:58 PM | 7 replies

The real win here isn't TS over Rust, it's the O(N²) -> O(N) streaming fix via statement-level caching. That's a 3.3x improvement on its own, independent of language choice. The WASM boundary elimination is 2-4x, but the algorithmic fix is what actually matters for user-perceived latency during streaming. Title undersells the more interesting engineering imo.
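A minimal sketch of the statement-level caching idea (the names, the class shape, and the ';'-delimited grammar are invented for illustration, not taken from the project): reparsing the whole buffer on every streamed chunk is O(N²) total work, while caching completed statements and only reparsing the trailing incomplete one is O(N).

```typescript
type Statement = { text: string };

// Naive approach: reparse the entire buffer on every chunk -> O(N^2) total.
function parseAll(buffer: string): Statement[] {
  return buffer
    .split(";")
    .filter((s) => s.trim().length > 0)
    .map((text) => ({ text: text.trim() }));
}

// Cached approach: completed statements are parsed once and kept; only the
// trailing incomplete statement is reparsed per chunk -> O(N) total.
class StreamingParser {
  private cache: Statement[] = [];
  private tail = ""; // text after the last ';', not yet a complete statement

  push(chunk: string): Statement[] {
    this.tail += chunk;
    const lastSep = this.tail.lastIndexOf(";");
    if (lastSep >= 0) {
      // Everything up to the last ';' is stable: parse once, cache forever.
      this.cache.push(...parseAll(this.tail.slice(0, lastSep + 1)));
      this.tail = this.tail.slice(lastSep + 1);
    }
    const result = [...this.cache];
    if (this.tail.trim().length > 0) result.push({ text: this.tail.trim() });
    return result;
  }
}
```

The key invariant is that the cached prefix never needs revisiting, so per-chunk work is proportional to the chunk, not to everything streamed so far.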


Replies

azakai | yesterday at 11:46 PM

O(N²) -> O(N) was 3.3x faster, but before that, eliminating the boundary (replacing wasm with JS) led to speedups of 2.2x, 4.6x, and 3.0x (see the previous table).

It looks like neither is the "real win": both the language and the algorithm made a big difference, as you can see in the first column of the last table. Moving off wasm was a big speedup, and improving the algorithm on top of that was another big speedup.

nulltrace | today at 12:01 AM

Yeah the algorithmic fix is doing most of the work here. But call that parser hundreds of times on tiny streaming chunks and the WASM boundary cost per call adds up fast. Same thing would happen with C++ compiled to WASM.
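A toy cost model of that effect (the numbers are made up for illustration, not measurements from the article): with a fixed per-call boundary cost, the overhead scales with the number of chunks, so tiny streaming chunks make it dominate.

```typescript
// Total time for calling a parser once per streamed chunk, given a fixed
// boundary-crossing cost per call plus the useful work per chunk.
function totalCostUs(
  chunks: number,
  perCallBoundaryUs: number, // fixed cost of crossing the JS<->WASM boundary
  perChunkWorkUs: number,    // actual parsing work per chunk
): number {
  return chunks * (perCallBoundaryUs + perChunkWorkUs);
}
```

With 500 tiny chunks at a hypothetical 5us of boundary overhead against 1us of real work per call, the boundary accounts for over 80% of the total time, regardless of what language sits on the other side of it.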

catlifeonmars | today at 3:22 AM

You’re not wrong, but that win would not get as many views. It’s not clickbaity enough.

socalgal2 | yesterday at 10:55 PM

Same for uv, but no one takes that message. They just think "Rust rulez!" and ignore that uv's benefits come from algorithms, not the language.

Aurornis | yesterday at 10:45 PM

> Title undersells the more interesting engineering imo.

Thanks for cutting through the clickbait. The post is interesting, but I'm so tired of being unnecessarily clickbaited into reading articles.

sroussey | yesterday at 10:42 PM

Yeah, though the O(N²) is overstating things.

One thing I noticed was that they time each call and then take a median. Sigh. In a browser. :/ With timing-attack defenses built into the JS engine.
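Since browsers deliberately coarsen `performance.now()` resolution as a timing-attack defense, per-call medians of sub-millisecond calls are mostly noise. A common workaround is to time a batch and divide; a sketch, with the clock injected so it runs outside a browser (in a browser you'd pass `() => performance.now()`):

```typescript
// Time `iterations` calls of `fn` with a single start/stop measurement and
// report the mean per-call time. One batch measurement keeps the clock's
// coarse resolution from swamping each individual sub-millisecond call.
function timeBatched(
  fn: () => void,
  iterations: number,
  now: () => number, // injected clock, e.g. () => performance.now()
): number {
  const start = now();
  for (let i = 0; i < iterations; i++) fn();
  return (now() - start) / iterations;
}
```

The trade-off is that a batch mean can't reject outliers the way a per-call median can, but at least the measured interval is large enough for the clock to resolve.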

shmerl | yesterday at 10:23 PM

More like misleading clickbait.