It is amazing that big endian is almost dead.
It will be relegated to the computing dustbin like non-8-bit bytes and EBCDIC.
Mainstream computing is vastly more homogeneous than it was when I was born almost 50 years ago. I guess that's a natural progression for technology.
Now it's just UTF-16 and non-'\n' newline conventions left to go.
We'll have to deal with it forever in network protocols. Thankfully that's rather walled off from most software.
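For anyone who hasn't run into that wall: here's a minimal sketch in C of what it looks like, using the standard POSIX byte-order helpers. Multi-byte fields get converted to big-endian "network byte order" before they go on the wire and converted back when they come off.

    #include <arpa/inet.h>   /* htons, htonl, ntohs, ntohl */
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint16_t port = 8080;          /* host byte order (little-endian on most CPUs today) */
        uint32_t addr = 0xC0A80001;    /* 192.168.0.1 as a 32-bit value, host order */

        /* IP/TCP/UDP headers require big-endian on the wire, so values are
           converted before being written into packet structures... */
        uint16_t port_be = htons(port);
        uint32_t addr_be = htonl(addr);

        /* ...and converted back after being read off the wire. */
        printf("port round-trips: %u\n", (unsigned)ntohs(port_be));
        printf("addr round-trips: 0x%08X\n", (unsigned)ntohl(addr_be));
        return 0;
    }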
Big endian will stay around as long as IBM continues to put in the resources to provide first-class Linux support on s390x. Of course, if you don't expect your software to ever be run on s390x, you can just assume little-endian, but that's already been the case for the vast majority of software developers ever since Apple stopped supporting PowerPC.
Good call-out; I just removed some endianness #ifdefs from my engine.
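Roughly the kind of guard this tends to be, as a hypothetical sketch rather than the exact code from my engine:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical sketch of an endianness guard for reading a field from a
       little-endian file format; not the actual code that was removed. */
    static uint32_t read_le32(const uint8_t *p) {
    #if defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __ORDER_BIG_ENDIAN__)
        /* Big-endian host: assemble the value byte by byte. */
        return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
               ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
    #else
        /* Little-endian host: the bytes are already in native order. */
        uint32_t v;
        memcpy(&v, p, sizeof v);
        return v;
    #endif
    }

    int main(void) {
        const uint8_t bytes[4] = { 0x78, 0x56, 0x34, 0x12 };  /* little-endian 0x12345678 */
        printf("0x%08X\n", (unsigned)read_le32(bytes));
        return 0;
    }

Once you commit to little-endian hosts only, the #if branch is dead weight and the whole thing collapses to the memcpy.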
> It is amazing that big endian is almost dead.
I wish the same applied to how numbers are written in LTR scripts (least significant digit first). Arithmetic operations would be a lot easier to do that way on paper or even mentally. I also wish the world would settle on a sane date-time format like ISO 8601 or RFC 3339 (both of which would reverse if my first wish were also granted).
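For reference, the shape both of those specify puts the most significant unit (the year) first; a minimal sketch producing it with nothing but standard strftime:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        time_t now = time(NULL);
        struct tm *utc = gmtime(&now);

        char buf[32];
        /* Year, month, day, then time, e.g. "2024-05-17T13:45:00Z".
           This shape is valid under both ISO 8601 and RFC 3339. */
        strftime(buf, sizeof buf, "%Y-%m-%dT%H:%M:%SZ", utc);
        printf("%s\n", buf);
        return 0;
    }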
> It will be relegated to the computing dustbin like non-8-bit bytes and EBCDIC.
I never really understood those non-8-bit bytes, especially the 7-bit byte. If you consider the multiplexer and demux/decoder circuits used heavily in CPUs, FPGAs and custom digital circuits, the only number that really makes sense is 8: it's what you get for a 3-bit selector code, with the nearby values being 4 and 16. Why did they go for 7 bits instead of 8? I assume that was a design choice made long before I was even born. Does anybody know the rationale?
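To put the selector argument in concrete terms, a quick sketch: a 3-bit selector code has 2^3 = 8 distinct values, so it addresses an 8-wide unit exactly, while a 7-wide unit wastes one code.

    #include <stdio.h>

    /* A 3-bit selector code has 2^3 = 8 values, so a decoder driven by it
       asserts exactly one of 8 output lines with nothing left over,
       whereas a 7-wide unit would leave one selector value unused. */
    int main(void) {
        for (unsigned sel = 0; sel < 8; sel++) {
            unsigned one_hot = 1u << sel;   /* the single output line asserted for this code */
            printf("sel=%u -> output line mask=0x%02X\n", sel, one_hot);
        }
        return 0;
    }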