> The claim that the code is inefficient really isn't well substantiated in this blog post. Sometimes long-winded assembly actually runs faster because of pipelining, register renaming, and other microarchitectural quirks.
I had a case back in the 2010s where I was trying to optimize a hot loop. The loop involved an integer division by a divisor common to all elements, similar to a vector normalization pass. For reasons I don't recall, I couldn't get rid of the division entirely.
I saw that the compiler had emitted an "idiv [mem]" instruction, and I thought surely that was suboptimal. So I reproduced the assembly, changing it slightly so I'd get "idiv reg" instead. All it involved was loading the divisor into an unused register before the loop and using that register inside the loop.
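Roughly what that looked like, reconstructed as GNU C inline asm (a sketch, not my original code; the function names are made up, and it assumes x86-64 with GCC/Clang extended asm):

    #include <stddef.h>
    #include <stdint.h>

    /* Variant A: what the compiler emitted. The divisor is a memory
       operand, so the idiv itself reads it on every iteration.
       Assumes *divisor != 0 and no INT32_MIN / -1 overflow. */
    static void normalize_mem(int32_t *v, size_t n, const int32_t *divisor) {
        for (size_t i = 0; i < n; i++) {
            int32_t q;
            __asm__ ("cltd\n\t"      /* sign-extend eax into edx:eax */
                     "idivl %[d]"    /* idiv [mem] */
                     : "=a" (q)
                     : "a" (v[i]), [d] "m" (*divisor)
                     : "edx", "cc");
            v[i] = q;
        }
    }

    /* Variant B: my "improvement". The divisor is hoisted into a
       register once, and idiv gets a register operand in the loop. */
    static void normalize_reg(int32_t *v, size_t n, int32_t divisor) {
        for (size_t i = 0; i < n; i++) {
            int32_t q;
            __asm__ ("cltd\n\t"
                     "idivl %[d]"    /* idiv reg */
                     : "=a" (q)
                     : "a" (v[i]), [d] "r" (divisor)
                     : "edx", "cc");
            v[i] = q;
        }
    }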
So I benchmarked it and, much to my surprise, it was a fair bit slower.
I thought it might have been due to loop target alignment, so I spent some time inserting NOPs to align things in various supposedly optimal ways, but it never got as fast. I changed my assembly back to mirror what the compiler had spit out and voilà, back to the fastest speed again...
I tried asking around, and someone suggested it had to do with internal register load/store contention or something along those lines.
At that point I knew I was done optimizing code by writing assembly. Not my cup of tea.
Not to suggest you weren't competent, but did you consider, and try to control for, the possibility that your measurement itself was the problem?
If you're doing enough divisions by the same divisor, it'd be faster to do what compilers do for division by a known constant: multiply by a precomputed integer reciprocal and shift.
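A minimal sketch of that trick for a divisor that's only known at run time (unsigned case; the helper names and the use of unsigned __int128 are my choices, the 128-bit type assumes GCC or Clang, and the signed case needs extra fix-up steps):

    #include <assert.h>
    #include <stdint.h>

    /* Precompute m = ceil(2^64 / d) once per divisor (d >= 2).
       Then (x * m) >> 64 == x / d for every 32-bit x, because the
       rounding error e = m*d - 2^64 satisfies x*e < 2^64. */
    static uint64_t reciprocal_init(uint32_t d) {
        assert(d >= 2);        /* d == 0 or d == 1: handle separately */
        return ~0ULL / d + 1;  /* floor((2^64 - 1)/d) + 1 == ceil(2^64/d) */
    }

    static uint32_t reciprocal_div(uint32_t x, uint64_t m) {
        return (uint32_t)(((unsigned __int128)x * m) >> 64);
    }

That way the loop pays one 64x64-bit multiply and a shift per element, and the single real division happens up front in reciprocal_init.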