
zozbot234 yesterday at 4:16 PM

> ... It's optimized for rewriting a formula many times.

It's not just "rewriting" arbitrarily, either, but rewriting according to well-known rules of expression manipulation: associativity, commutativity, distributivity of various operations, the properties of equality and order relations, and so on. It's precisely when you have such strong, identifiable properties that you tend to resort to operator-like notation in any formalism (including a programming language), not least because that's where a notion of "rewriting some expression" is at its most effective.

(This is generally true in reverse, too; it's why e.g. text-like operators such as fadd() and fmul() are far better suited to the actual low-level properties of floating-point computation than FORTRAN-like symbolic expressions, which can be misleading.)
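
A minimal sketch of the point in Python (fadd here is a hypothetical wrapper standing in for an IR-level operator like LLVM's fadd, not any real library's API), showing how the explicit call tree pins down an evaluation order that infix notation leaves to the parser:

    def fadd(x: float, y: float) -> float:
        # one IEEE-754 double addition, correctly rounded
        return x + y

    a, b, c = 0.1, 0.2, 0.3

    # Infix "a + b + c" silently means this left-to-right order...
    left = fadd(fadd(a, b), c)
    # ...and re-associating it is a visible, deliberate rewrite:
    right = fadd(a, fadd(b, c))

    print(left == right)  # False: 0.6000000000000001 vs 0.6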


Replies

kragen yesterday at 5:41 PM

Hmm, I'm not sure whether operator-like notation has any special advantage for commutativity and distributivity other than brevity. a + b and add(a, b) are equally easy to rewrite as b + a and add(b, a).

Maybe there is an advantage for associativity, in that rewriting add(a, add(b, c)) as add(add(a, b), c) is harder than rewriting a + b + c as a + b + c, i.e., not rewriting it at all. But most of the time you would have just written add(a, b, c) in the first place. That doesn't handle a + b - c (add(a, sub(b, c)) vs. sub(add(a, b), c)), but the operator syntax stops helping when your expression is a - b + c instead, which is not a - (b + c) but a - (b - c).
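
(You can check how the infix form actually groups with Python's standard ast module; this is just an illustration, not anything specific to the thread:)

    import ast

    # "a - b + c" parses left-associatively, i.e. (a - b) + c:
    print(ast.dump(ast.parse("a - b + c", mode="eval").body))
    # BinOp(left=BinOp(left=Name(id='a', ctx=Load()), op=Sub(),
    #                  right=Name(id='b', ctx=Load())),
    #       op=Add(), right=Name(id='c', ctx=Load()))
    # So a - b + c means a - (b - c), not a - (b + c).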

Presumably the notorious non-associativity of floating-point addition is what you're referring to with respect to fadd() and fmul()?

I guess floating-point multiplication isn't quite associative either, but the simplest example I could come up with was 0.0 * 603367941593515.0 * 2.9794309755910265e+293, which can be either 0 or NaN depending on how you associate it. There are also examples where you lose bits of precision to gradual underflow, like 8.329957634267304e-06 * 2.2853928075274668e-304 * 6.1924494876619e+16. But I feel like these edge cases matter fairly rarely?
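
Here's a quick check of those two examples in Python (CPython floats are IEEE-754 doubles on every platform I know of):

    a, b, c = 0.0, 603367941593515.0, 2.9794309755910265e+293
    print((a * b) * c)  # 0.0  (zero times anything finite)
    print(a * (b * c))  # nan  (b * c overflows to inf, and 0.0 * inf is nan)

    # Gradual underflow: x * y is subnormal, so it has already lost
    # low-order bits before the final multiplication.
    x, y, z = 8.329957634267304e-06, 2.2853928075274668e-304, 6.1924494876619e+16
    print((x * y) * z == x * (y * z))  # False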

On my third try I got 3.0 * 61.0 * 147659004176083.0, which isn't an edge case at all, and rounds differently depending on the order you do the multiplications in. But it's an error of about one part in 10¹⁶, and I'd think that algorithms that would be broken by such a small amount of rounding error are mostly broken in floating point anyway?
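
Checking that one too (same assumption of IEEE-754 doubles):

    x = 147659004176083.0     # exactly representable, well below 2**53

    lo = (3.0 * 61.0) * x     # 183.0 * x, rounded once near 2**54
    hi = 3.0 * (61.0 * x)     # 61.0 * x already rounds near 2**53

    print(lo)                 # 2.7021597764223188e+16
    print(hi)                 # 2.7021597764223192e+16
    print(abs(lo - hi) / lo)  # ~1.5e-16, i.e. about one part in 10**16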

I am pretty sure that both operators are commutative.
