Human brains aren’t magic in the literal sense but do have a lot of mechanisms we don’t understand.
They're certainly special, both to the individual and as a species on this planet. There are many brains similar to human brains, but none we know of with comparable capabilities.
They're also quite obviously different from LLMs, both in how they work at a foundational level and in capability.
I definitely agree with the materialist view that we will ultimately be able to emulate the brain using computation, but we're nowhere near that yet, nor should we undersell the complexity involved.
I agree we shouldn't undersell or underestimate the complexity involved, but when LLMs start contributing significant ideas to scientists and mathematicians, it's time to recognize that whatever tricks biology uses (in humans, octopuses, ...) may still be of interest and of value, but they no longer seem like the unique magical missing ingredients that were so long sought after.
From this point on it's all about efficiencies:
modeling efficiency: how do we best fit the elephant, with Bézier curves, rational polynomials, ...? (see the first sketch after this list)
memory bandwidth / training efficiency: when building coincidence statistics, say bigrams, is it really necessary to update the weights for all concepts? A co-occurrence of 2 concepts should just increase the predicted probability for the just-observed bigram and then decrease a global coefficient used to scale all predicted probabilities. I.e. observing a baobab tree + an elephant in the same image/sentence/... should not change the relative probabilities of observing french fries + milkshake versus bicycle + windmill. This suggests architectures with much lower training costs should be possible, by only updating the weights of the concepts observed in the last bigram (see the second sketch after this list).
and so on with all other kinds of efficiencies.
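To make the modeling-efficiency question concrete, here is a minimal sketch in plain numpy. The "elephant" here is just a made-up lumpy closed curve, not a real outline, and a truncated Fourier series stands in for Bézier curves or rational polynomials; the only point is the trade-off between parameter count and fit quality.

```python
# Sketch: how many parameters does it take to fit a closed 2D curve?
import numpy as np

t = np.linspace(0, 2 * np.pi, 400, endpoint=False)

# Hypothetical target curve standing in for "the elephant".
x = np.cos(t) + 0.4 * np.cos(3 * t) + 0.1 * np.sin(7 * t)
y = np.sin(t) + 0.3 * np.sin(2 * t) + 0.1 * np.cos(5 * t)

def fourier_fit(signal, n_harmonics):
    """Least-squares fit of a truncated Fourier series to one coordinate."""
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * t), np.sin(k * t)]
    basis = np.stack(cols, axis=1)               # (400, 2n+1) design matrix
    coef, *_ = np.linalg.lstsq(basis, signal, rcond=None)
    return basis @ coef

for n in (1, 3, 7):
    err = max(abs(fourier_fit(x, n) - x).max(),
              abs(fourier_fit(y, n) - y).max())
    print(f"{2 * (2 * n + 1):2d} parameters -> max error {err:.3f}")
```

With 7 harmonics (30 parameters) the residual drops to numerical noise; the interesting question is which basis gets there with the fewest.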
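And a toy illustration of the sparse bigram update: a count table with one shared global normalizer stands in for a model's weights (the concept names are just the examples above; none of this is a real training architecture). One observation touches exactly one pair entry plus one scalar, and the relative probabilities of all unrelated pairs are untouched:

```python
# Sketch: sparse co-occurrence updates with a single global normalizer.
from collections import defaultdict

pair_counts = defaultdict(int)   # one count per unordered concept pair
total = 0                        # global coefficient shared by all pairs

def observe(a, b):
    """Record one co-occurrence: one dict entry + one scalar are updated."""
    global total
    pair_counts[frozenset((a, b))] += 1
    total += 1                   # growing the normalizer scales everything down

def prob(a, b):
    """Predicted probability of the pair under the shared normalizer."""
    return pair_counts[frozenset((a, b))] / total

for _ in range(3):
    observe("french fries", "milkshake")
for _ in range(2):
    observe("bicycle", "windmill")

before = prob("french fries", "milkshake") / prob("bicycle", "windmill")
observe("baobab tree", "elephant")   # the unrelated new observation
after = prob("french fries", "milkshake") / prob("bicycle", "windmill")

print(before, after)   # 1.5 1.5: the relative odds are unchanged
```

In a real model the counts would be learned weights, but this is the accounting argument behind the claim: an unrelated co-occurrence shouldn't require touching every weight, only the observed pair and one global scale factor.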
Ofc, and we probably never will understand them, because of sheer complexity. That doesn't mean we can't replicate the output distribution through data. Probably, when we manage to do that efficiently, the mechanisms (if they are efficient) will be learned too.
When someone says "AIs aren't really thinking" because AIs don't think like people do, what I hear is "Airplanes aren't really flying" because airplanes don't fly like birds do.