LLMs are inherently emulators of digitally imprinted artifacts of human consciousness. Once people truly grasp what this means, they will stop being baffled by the fact that LLM performance deteriorates as the novelty of a task increases.
EDIT: Had there been an ounce of actual reasoning emerging in LLMs, OpenAI would be running this thing privately 24/7 to produce new science and capture patents that would give them economic dominance, not trying to sell tokens to us all.