My internal mnemonic for targeting AI correctly is 'It's easier to change a problem into something AI is good at than it is to change AI into something that fits every problem.'
But unfortunately, the former requires a nuanced understanding of the strengths and weaknesses of current AI systems, which is a conversation the industry doesn't want to have while it's still riding the froth of a hype cycle.
In other words: 'any current weaknesses in AI systems are just temporary growing pains before an AGI future.'
> 'any current weaknesses in AI systems are just temporary growing pains before an AGI future'
I see we've met the same product people :)