Hacker News

fc417fc802 · yesterday at 7:02 PM

I agree that what you say seems reasonable at a glance. But (IIUC) the issue is that, for optimization, we want the compiler to assume that UB doesn't happen, in order to constrain the possible code paths. So if it goes some distance down a possible execution branch and discovers UB, it can trim that subtree. At that point "anything can happen" becomes an (approximate) reality.

The obvious counterpoint in this particular instance is that there's no good reason not to make such an awful expression a compile-time error.

I also personally think that evaluation order should be strictly defined. I'm unclear on whether the current arrangement ever offers noticeable benefits, but it is abundantly clear that it makes the language more difficult to reason about.


Replies

IshKebab · yesterday at 8:57 PM

As I understand it, UB was not really intended to be for optimisation. It was so that C could compile on the wildly different architectures that existed at the time.

Today we don't have nearly the variety of architectures, so in theory C doesn't need nearly as much UB (like more modern languages).

Although there is one modern case where C's "anything goes" attitude has actually helped: CHERI works pretty well with C/C++ even though its pointers are double their normal size, because so many things you can do with pointers are UB (I assume as a legacy of segmented memory). CHERI is a slightly awkward target for Rust because Rust makes more assumptions about pointers, specifically that pointers and addresses are the same size.