> simply referencing established knowledge would ever get the correct answer to novel problems, absent any understanding of that knowledge.
What is a concrete example of this?
What problems have LLMs (i.e., general-purpose models like ChatGPT, Claude, Gemini, etc., not special-purpose algorithms such as MCTS that humans tuned for particular tasks, as in AlphaGo or AlphaFold) solved that thousands of humans worked on for decades without solving (i.e., as OP said, novel)? Can you name 1-3 of them?
Coding seems like the most prominent example.