Hacker News

cubefox · yesterday at 5:29 PM

> The title is slightly misleading, I mean yes, naive binary search might have larger constant but the algorithm is still O(log(n)).

I think the title is not misleading, since Big O notation is only supposed to give a rough estimate of an algorithm's performance; it deliberately ignores constant factors.

(I agree though that binary search is already extremely fast, so making something twice as fast won't move the needle for the vast majority of applications where the speed bottleneck is elsewhere. Even infinite speed, i.e. instant sorted search, would likely not be noticeable for most software.)


Replies

senfiaj · yesterday at 5:47 PM

For me it's slightly misleading, because it's almost like saying "I wrote a faster quicksort implementation, so it beats quicksort!". In this case the fundamental binary-search idea of divide and conquer is still there; the article just applies micro-optimizations (which seem not to be very portable, and are less relevant/applicable to more complex data structures) in order to reduce the constant factor.

Yes, algorithmic complexity is theoretical and often ignores real-world constants, but complexity classes are usually useful when comparing algorithms on larger inputs, unless we are talking about "galactic algorithms" with insanely large constants.