Hacker News

steveklabnik · yesterday at 1:44 PM

In the spirit of the article... there are a few ways in which this could go :)

The first is, we do have some amount of empirical evidence here: Rust had to turn its aliasing optimizations on and off again a few times due to bugs in LLVM. A comment from 2021: https://github.com/rust-lang/rust/issues/54878#issuecomment-...

> When noalias annotations were first disabled in 2015 it resulted in between 0-5% increased runtime in various benchmarks.

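For anyone who hasn't seen what these annotations buy, here's a tiny sketch (my own illustrative function, not one of the benchmarks in question) of the kind of code the noalias metadata helps with:

    // With noalias info, the compiler knows `x` and `y` can't overlap, so it
    // can keep `*x` in a register and fold the body into roughly `*x += 2 * *y`.
    // Without it, it must assume the first `*x += *y` might have written
    // through `y`, and reload on the second line.
    pub fn add_twice(x: &mut i32, y: &i32) {
        *x += *y;
        *x += *y;
    }

    fn main() {
        let mut a = 1;
        add_twice(&mut a, &10);
        assert_eq!(a, 21);
    }
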
This leaves us with a few relevant questions:

Were those benchmarks representative of real-world code? (They're not linked, so we can't know. The author is reliable as far as I'm concerned, and I link to that comment specifically because I'd take them at their word, but we have no way to verify the off-hand remark directly, and they make no specific claim about representativeness.)

Those benchmarks compare Rust code with the optimization turned off and back on, not Rust code against C code. Does that make them a good measure of the question at hand, or a bad one?

These were LLVM's 'noalias' markers, which were designed for `restrict` in C. Do those semantics actually take full advantage of Rust's aliasing model, or not? Could a compiler that implements these optimizations in a different way do better? (I'm not fully sure of the latest state here, and I suspect some corners of the answer depend on the stacked borrows vs. tree borrows work being finalized.)
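
To make that last question concrete: as far as I know, LLVM's noalias is fundamentally a function-parameter attribute, while Rust's rules apply to references wherever they appear. A rough sketch of the gap (my own example; whether an optimizer can actually exploit it is exactly the part that depends on those unfinished semantics):

    pub struct Refs<'a> {
        pub dst: &'a mut i32,
        pub src: &'a i32,
    }

    // Rust's model says `r.dst` and `r.src` can't overlap, just like plain
    // `&mut`/`&` parameters. But once the references are fields of a struct,
    // there's no parameter to hang a `noalias` attribute on, so the
    // `restrict`-shaped encoding loses that information.
    pub fn add_twice_via_struct(r: Refs<'_>) {
        *r.dst += *r.src;
        *r.dst += *r.src;
    }

    fn main() {
        let mut d = 1;
        add_twice_via_struct(Refs { dst: &mut d, src: &10 });
        assert_eq!(d, 21);
    }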


Replies

Measter · yesterday at 2:34 PM

Another issue with the measurements taken back then is that LLVM was miscompiling code with the annotations enabled, which, to me, calls into question how much we can trust that performance delta.

Additionally, that was 10 years ago and LLVM has changed; it could do better now, or it could do worse. I would actually be interested in seeing some benchmarks with a modern rustc.
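
One cheap way to poke at this without patching the compiler: write the same kernel once with references (which carry the noalias metadata today) and once with raw pointers (which don't), as a rough stand-in for toggling the annotations. It's not the same experiment as the 2015 one, and any difference will depend heavily on the workload, but it's a start:

    use std::hint::black_box;
    use std::time::Instant;

    // Reference version: the `&mut`/`&` parameters carry noalias metadata,
    // so the optimizer may assume `dst` and `src` don't overlap.
    fn saxpy_refs(dst: &mut [f32], src: &[f32], a: f32) {
        for i in 0..dst.len().min(src.len()) {
            dst[i] += a * src[i];
        }
    }

    // Raw-pointer version: no noalias metadata, so the optimizer has to be
    // more conservative about overlap. The caller promises the two ranges
    // are valid and disjoint.
    unsafe fn saxpy_ptrs(dst: *mut f32, src: *const f32, len: usize, a: f32) {
        for i in 0..len {
            // SAFETY: caller guarantees both pointers are valid for `len` elements.
            unsafe { *dst.add(i) += a * *src.add(i) };
        }
    }

    fn main() {
        let n = 1 << 20;
        let mut dst = vec![1.0f32; n];
        let src = vec![2.0f32; n];

        let t = Instant::now();
        for _ in 0..1_000 {
            saxpy_refs(black_box(&mut dst[..]), black_box(&src[..]), 0.5);
        }
        println!("refs: {:?}", t.elapsed());

        let t = Instant::now();
        for _ in 0..1_000 {
            unsafe { saxpy_ptrs(black_box(dst.as_mut_ptr()), black_box(src.as_ptr()), n, 0.5) };
        }
        println!("ptrs: {:?}", t.elapsed());
    }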