It's a definition based on practical results. That's a good definition, because it doesn't require that we already know the exact implementation. It doesn't rely on guessing; it's testable in a literal "put your money where your mouth is" way.
If it can do things as well as or better than humans, then either the AI has a type of general intelligence or the human does not.
Defining capabilities by outcome rather than implementation should be very familiar to an engineer of any kind, because that's how every unsolved implementation must start.
What is the as-of date on what work is economically valuable and how much is available?
Do you know how an LLM works? Can you describe it?
By your definition, every machine has a type of general intelligence. Not just a bog-standard calculator, but also my broom. It doesn't matter if you slap "smart" on the side; I'm not going to call my washing machine "intelligent", especially considering it's over a decade old.
I don't think these definitions make anything any clearer. If anything, they make things less clear. They equate humans to mindless automata. They create AGI by sly definition and let the proposer declare success arbitrarily.