Hacker News

What is f(x) ≤ g(x) + O(1)? Inequalities With Asymptotics

35 points by ibobev | last Friday at 1:55 PM | 24 comments

Comments

qsort | yesterday at 11:41 PM

I think the confusion is because, strictly speaking, $f(x) = O(g(x))$ is an abuse of notation. $O(g(x))$, $\Theta(g(x))$ and friends are sets. We can't say that a function equals a set, or that a function "is less" than another function, but notoriously mathematics runs on javascript, so we try to do something instead of throwing a type error.

Here "is less" is interpreted as "eventually less for all values" and "plus a set" is interpreted as "plus any function of that set".

I never liked this notation for asymptotics and I always preferred the $f(x) \in O(g(x))$ style, but it's just notation in the end.
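
For reference, one standard textbook form of the set definition being alluded to (quantifiers and constant names are the usual convention, not anything specific to this thread):

```latex
% Big-O as a set of functions: f is in O(g) iff f is eventually
% bounded above by a constant multiple of g.
O(g(x)) = \left\{\, f \;:\; \exists\, C > 0,\ \exists\, x_0,\ \forall x \ge x_0,\ |f(x)| \le C\,|g(x)| \,\right\}
```

With this definition, $f(x) \in O(g(x))$ is a perfectly ordinary membership statement, and the $=$ version is the shorthand for it.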

josalhor | yesterday at 11:38 PM

> computer science students should be familiar with the standard f(x)=O(g(x)) notation

I have always thought that expressing it like that instead of f(x) ∈ O(g(x)) is very confusing. I understand the desire to apply the arithmetic notation of summation to represent the terms, but "concluding" this notation with equality, when it's not an equality, is grounds for confusion.

dataflow | today at 12:49 AM

Maybe an easier explanation: just subtract g(x) from both sides.

You get:

  f(x) - g(x) ≤ O(1)
Now, if you already know that

  f(x) - g(x) = O(1)
means "f and g eventually differ by no more than a constant", then

  f(x) - g(x) ≤ O(1)
must mean "f eventually stops exceeding g by a constant".
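
Spelled out under the set interpretation from the other comments, the chain of readings here is (a sketch, using the usual existential-witness convention):

```latex
f(x) \le g(x) + O(1)
  \;\iff\; \exists\, h \in O(1) :\ f(x) \le g(x) + h(x)
  \;\iff\; \exists\, C :\ f(x) - g(x) \le C \ \text{for all sufficiently large } x
```

Subtracting $g(x)$ from both sides just moves between the first and last lines.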

bo1024 | today at 12:39 AM

The easiest way to read it is "there exists a function h in O(1) such that f(x) <= g(x) + h(x)."

I think first we should teach "f in O(g)" notation, then teach the above, then observe that a special case of the above is the "abuse of notation" f(x) = O(g(x)).
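
A toy numeric illustration of the existential reading, with made-up functions (f, g, and the witness constant here are hypothetical examples, not from the thread):

```python
import math

# Hypothetical example: f(x) = x + 5*sin(x), g(x) = x.
# The claim f(x) <= g(x) + O(1) holds with witness h(x) = 5
# (a constant function, hence in O(1)), since
# f(x) - g(x) = 5*sin(x) <= 5 for all x.
def f(x):
    return x + 5 * math.sin(x)

def g(x):
    return x

C = 5  # witness: h(x) = C is in O(1)

# Check the inequality f(x) <= g(x) + C on sample points.
assert all(f(x) <= g(x) + C for x in range(1000))
```

The point is that the inequality is really a claim about the existence of such an h, which a single constant witnesses here.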

tokenless | today at 1:58 AM

You do things a bit differently on an actual computer, right? x cannot actually tend to infinity there, so by that measure everything is O(1), and common sense is applied instead.

edflsafoiewq | today at 12:28 AM

O(1) just means "a bounded function (on a neighborhood of infinity)". Unlike f(x), which refers to some function by name, O(1) refers to some function by a property it has. It's the same principle at work in "even + odd = odd".
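
The parity analogy can be made precise the same way: "even + odd = odd" means "any even number plus any odd number is some odd number", i.e. the equation quantifies over representatives of each class:

```latex
(2m) + (2n + 1) = 2(m + n) + 1
```

Just as with $O(1)$, the symbol "even" names a number only by a property it has, and the "equation" asserts membership in a class.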

Programmers wringing their hands over the meaning of f(x)=O(g(x)) never seem to have manipulated any expression more complex than f(x)=O(g(x)).

nicggckgcbnntoday at 2:42 AM

[dead]

jherikotoday at 12:03 AM

[dead]