What's the reason that C didn't define the order of this?
The horrible undefined behavior of signed integer overflow can at least be explained by the fact that multiple CPU architectures existed that handled it differently (though the fact that C 'infects' you with its ill-defined signed integers even when you're using unsigned ones, e.g. by returning a signed int when left-shifting a uint16_t by a uint16_t, is not as forgivable imho)
But this is something that could be completely defined at the language level; there's nothing CPU-dependent here. They could simply have stated in the language specification that, e.g., evaluation proceeds from left to right (and/or other rules, such as post-increment taking effect only after the full statement finishes). My point is not whether the rule I just typed is complete enough, but that the language designers could have made the behavior completely defined.
Sethi-Ullman register allocation reorders subexpression evaluation to achieve efficient register allocation: https://dl.acm.org/doi/10.1145/321607.321620
With modern register allocators and larger register sets, the code generation impact of following source evaluation order is of course lower than it used to be. Some CPUs can even involve stack slots in register renaming: https://www.agner.org/forum/viewtopic.php?t=41
On the other hand, even modern Scheme leaves evaluation order undefined. It's not just a C issue.
Applying the increment or decrement operators to the same variable more than once in the same expression should be a compile-time error.
Anyway, yes, this one example has an obvious order it should be applied. But still, something like it shouldn't be allowed.
It's valuable for compilers to be able to choose the instruction scheduling order. Standards authors try not to unnecessarily bind implementors. If post increment happened after the full statement is finished, then the original value has to be maintained until the next sequence point. Maybe the compiler will be smart enough to elide that, maybe not, but it's a lot more difficult to fix those kinds of edge cases than to say sequencing is undefined.
The C standard doesn't define things where two or more historical compilers disagreed and there wasn't an obviously correct way. This is defined behavior (left to right, assignment last) in Java, which is a different language.
Probably because when C was standardised there were already multiple implementations, and this was an area where implementations differed but it wasn't viewed as important enough to bring them in line with one approach.
The only other reasonable option is to make such garbage a compile-time error. There is no reasonable definition of what code like that should do, and if you write it in the real world you need to find a better job fit. I'd normally say McDonald's is hiring, but they don't want people like that either.
> What's the reason that C didn't define the order of this?
I didn't open TFA but my first thought was "Is this even defined?".
It kinda makes sense that such fuckery could be left undefined.
It's defined, and it's called "operator precedence": both post- and pre-increment have a higher precedence than the binary "+".
At least according to this: https://en.wikipedia.org/wiki/Operators_in_C_and_C%2B%2B#Exp...
I think the main confusion here comes from the fact that "a" is just a value, not a pointer; with a pointer it would matter whether the value at the address it points to is accessed before or after the increment of the pointer itself.
Anyway… my C skills are rusty, so maybe I've got it wrong. :) In any case I would always use parentheses to avoid any ambiguity in constructs like this.
The short answer is because C was designed to give leeway to really dumb compilers on really diverse hardware.
This isn't quite the same case, but it's a good illustration of the effect: on gcc, if you have an expression f(a(), b()), the order in which a and b get evaluated is [1] dependent on the architecture and calling convention of f. If the calling convention wants arguments pushed from right to left, then b is evaluated first; otherwise, a is evaluated first. If you evaluate the arguments in the order they get pushed, each result can be pushed onto the stack immediately after its call; in the wrong order, the result is a live value that has to be carried across another function call, which costs a couple more instructions. I don't have a specific example for increment/decrement instructions, but considering extremely register-poor machines and hardware support for increment/decrement addressing modes, it's not hard to imagine similar cases where forcing the compiler to insert the increment at the 'wrong' point is similarly expensive.
Now, with modern compilers using cross-architecture IRs as their main avenue of optimization, the benefit from this kind of flexibility is very limited, especially since the penalties on modern architectures for the 'wrong' order of things can be reduced to nothing with a bit more cleverness. But compiler developers tend to be loath to change observable behavior, and the standards committee unwilling to mandate that compiler developers have to modify their code, so the fact that some compilers have chosen to implement it in different manners means it's going to remain that way essentially forever. If you were making a new language from scratch, you could easily mandate a particular order of evaluation, and I imagine that every new language in the past several decades has in fact done that.
[1] Or at least was 20 years ago, when I was asked to look into this. GCC may have changed since then.