That definition is as I said: "something about which no conclusions can be drawn because the proposed definitions lack sufficient precision and completeness."
"Highly autonomous systems" and "most economically valuable work" aren't precise enough to be useful.
"Highly" implies that there is a continuum, so where does directed end and autonomy begin?
"Most economically valuable work"... each word in that has wiggle room, not to mention that any reasonable interpretation of it is a shifting goalpost as the work done by humans over history has shifted a great deal.
The point is that none of this is defined in a way that lets people agree on whether something is AGI/ASI/etc. or not. If people can't agree, then there's no point in talking about it.
EDIT: interestingly, by the OpenAI definition of AGI, a subset of humans would not qualify as generally intelligent.
It's a definition based on practical results. That's a good definition, because it doesn't require that we already know the exact implementation. It doesn't require guessing; it's a literal "put your money where your mouth is" test.
If it can do things as well as or better than humans, then either the AI has a type of general intelligence or the human does not.
Defining capabilities based on outcome rather than implementation should be very familiar to engineers of any kind, because that's how every unsolved implementation must start.
I think you can say that if human engineers still exist, it's hard to claim we have AGI. If human engineers have been entirely replaced, then it's hard to claim we don't have AGI.