> For 99% of tasks I'm totally certain there's people out there that are orders of magnitude better at them than me.
And LLMs slurped some of those up along with the output of thousands of people who’d do the task worse, and you have no way of forcing the model to produce the good version every time.
> If the AI can regurgitate their thinking, my output is better.
But it can’t. Not reliably and consistently, so that hypothetical is about as meaningful as “if I had a magic wand to end world hunger, I’d use it”.
> Humans may not need to think to just... do stuff.
If you don’t think to do regular things, you won’t be able to think to do advanced things. It’s like any muscle: if you don’t use it, it atrophies.
> And LLMs slurped some of those together with the output of thousands of people who’d do the task worse, and you have no way of forcing it to be the good one every time.
That’s solvable, though, whether by curating the training data or through RL.