> The complexity of binary search in terms of "search" (comparison) operations is exactly log_2(n)+1, not just O(n)
> So not exactly "n" as in O(n).
For large enough inputs, the algorithm with better Big O complexity will eventually win (at least in the worst case). Yes, sometimes that never happens in practice when the constants are too large. But say 100 * n * log(n) will eventually beat 5 * n^2 for large enough n. Some advanced algorithms switch to sub-algorithms with worse Big O complexity but smaller constants on small enough sub-problems to improve performance (e.g. quicksort implementations falling back to insertion sort on small ranges). But that's more of an optimization detail than a completely different algorithm.
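A quick sketch of that crossover idea, with hypothetical cost functions (the constants 100 and 5 are just illustrative, matching the numbers above): a "slow" O(n log n) algorithm with a large constant eventually undercuts a "fast" O(n^2) one with a small constant.

```python
import math

def crossover(f, g, n_max=10**6):
    """Return the smallest n >= 2 where f(n) < g(n), or None if never."""
    for n in range(2, n_max):
        if f(n) < g(n):
            return n
    return None

# Hypothetical per-input costs: big-constant O(n log n) vs. small-constant O(n^2).
n_log_n = lambda n: 100 * n * math.log2(n)
quadratic = lambda n: 5 * n * n

n = crossover(n_log_n, quadratic)
print(n)  # -> 144: beyond this point the O(n log n) algorithm always wins
```

So with these constants the asymptotically better algorithm only starts winning around n = 144; with larger constants that break-even point can easily move past any input size you care about in practice.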
> This algorithm just uses modern and current processor architecture artifacts to "improve" it on arrays of up to 4096
Yes, that's my point. It's basically "I made binary search for integers X times faster on some specific CPUs". "Beating binary search" is somewhat misleading; it's more like "micro-optimizing binary search".