
tsimionescu · yesterday at 9:08 PM

Again, the worst-case complexity can be O(N), but it can also be Big-Omega(N^2); neither notation is reserved for one particular case. Note that Knuth talks about the magnitude of a function, not about the runtime of an algorithm. The worst-case complexity of an algorithm is a function; call it f_worst. The best-case complexity of that same algorithm is a different function; call it f_best. It's just as meaningful to talk about O(f_worst) as it is to talk about O(f_best).
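
Put slightly more formally (my notation, not from the thread): if cost(x) is the number of steps the algorithm takes on input x, the two functions are

    f_worst(n) = \max_{|x| = n} cost(x),    f_best(n) = \min_{|x| = n} cost(x)

Both are ordinary functions of n, so O, Big-Omega, and Big-Theta apply to each of them equally well.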

This is actually pretty common in CS. For example, people will often say that the typical-case complexity of Quicksort is O(N log N), even though the worst-case complexity is O(N^2). I doubt you'll find anyone who can tell you off the top of their head what the exact complexity function of Quicksort is, and it depends anyway on the details of how the algorithm is implemented, even in pseudocode (will it do N reads, or 2N reads?).
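
As an illustration (a minimal textbook Quicksort I'm sketching here, not code from the thread), counting comparisons makes both regimes visible. Note that this particular version compares each element to the pivot twice per partition, exactly the kind of implementation detail that changes the constant but not the O:

    import random

    def quicksort(a):
        # Textbook quicksort with the first element as pivot.
        # Returns (sorted list, comparison count).
        if len(a) <= 1:
            return a, 0
        pivot = a[0]
        less = [x for x in a[1:] if x < pivot]    # len(a) - 1 comparisons
        rest = [x for x in a[1:] if x >= pivot]   # another len(a) - 1 (the N-vs-2N detail)
        left, c_left = quicksort(less)
        right, c_right = quicksort(rest)
        return left + [pivot] + right, 2 * (len(a) - 1) + c_left + c_right

    random.seed(0)
    _, typical = quicksort(random.sample(range(500), 500))  # random permutation
    _, worst = quicksort(list(range(500)))                   # already-sorted input
    print(typical, worst)  # typical ~ a few * N log N; worst = 499 * 500 = 249500

On a random permutation the count grows like N log N; on already-sorted input, this first-element-pivot version degenerates to the quadratic worst case.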

Let's take a simple example: linear search. In the worst case, we read every element of the array, compare it to the desired value, and never find it. So the worst-case runtime is 2N (N reads + N comparisons): f_worst(n) = 2n. In the best case, we read the first item of the array, compare it to the desired value, and it is equal, so we need just 2 operations: f_best(n) = 2. Now we can say that O(f_worst(n)) = O(2n) = O(n), and we can also say that O(f_best(n)) = O(2) = O(1). So the best-case complexity of linear search is O(1), and the worst-case complexity is O(n). Nobody wants to keep track of details like this, which is exactly why we don't say things like "the best-case complexity of linear search is 2".
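
That bookkeeping is easy to check mechanically; here is a small sketch (my code, counting the same reads and comparisons as above):

    def linear_search(arr, target):
        # Returns (found, operation count): one read + one comparison per element.
        ops = 0
        for x in arr:
            ops += 1           # reading the element
            ops += 1           # comparing it to the desired value
            if x == target:
                return True, ops
        return False, ops

    assert linear_search([7, 1, 3], 7) == (True, 2)    # best case: f_best(n) = 2
    assert linear_search([1, 2, 3], 9) == (False, 6)   # worst case: f_worst(3) = 2 * 3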