An LLM gets things right, when it does, because of the sheer mass of information ingested during training: it can use probabilities to pull a correct answer from deep within the model.
Humans, on the other hand, have developed a more elaborate scheme to process, or reason about, data without having to read through a billion math problems and Stack Overflow answers. We listen to a few explanations, watch a YT video, do a few exercises, and we're ready to go.
The fact that we may get similar grades (e.g. at high school math) is just a coincidence of where both "species" (AI vs. human) happen to be right now at succeeding. But if we look closer at failure, we'll see that we fail very differently. AI failure right now looks, to us humans, very nonsensical.
While I'd agree human failures are different from AI failures, human failures are nevertheless also nonsensical. Familiar, human, but nonsensical: consider how often a human disagreeing with another will fall back on "that's just common sense!"
I think the larger models are consuming on the order of 100,000 times as much material as we do, and while they have a much broader range of knowledge, it isn't 100,000 times the breadth.
Nah, human failures look equally nonsensical. You're just more attuned to using their body language or peer judgement to augment your reception. Really psychotic humans can bypass this check.
> Humans on the other hand have developed a more elaborate scheme to process, or reason [ ... ] We listen to some explanations, a YT video, a few exercises
Frequent repetition in a social context has been the learning technique of our species. To paraphrase Feynman, learning is transferring.