> computer science students should be familiar with the standard f(x)=O(g(x)) notation
I have always thought that writing it that way instead of f(x) ∈ O(g(x)) is very confusing. I understand the desire to reuse ordinary arithmetic notation for sums of terms, but concluding the notation with an equals sign when it is not actually an equality is grounds for confusion.
Given this possible confusion, is it still valid to say the following two expressions are equivalent, as the article does?
f(x) = g(x) + O(1)
f(x) - g(x) = O(1)
You're confused because "O(g(x))" in that position doesn't denote the set itself; it's notation for "some element of that set".
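Under that reading, the equivalence asked about above can be sketched as follows (a hedged derivation, not taken from the article):

```latex
% Read "= O(1)" as "equals some element of O(1)",
% i.e. some function h with |h(x)| \le C for a constant C
% and all sufficiently large x. Then:
f(x) = g(x) + O(1)
  \iff \exists h \in O(1) \text{ such that } f(x) = g(x) + h(x)
  \iff f(x) - g(x) \in O(1)
  \iff f(x) - g(x) = O(1)
```

Note the convention is asymmetric: each `=` with an O-term on the right means "is some element of", so the chain is read left to right and the two displayed expressions name the same condition.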