The most interesting part is the realization that if an LLM's training input is only the output of a (human) professional, then by definition the LLM cannot mimic the process the professional applied to get from whatever input they had to that output.
In other words, an LLM can spit out a plausible "output of X", but it cannot encode the process that led X to transform their inputs into that output.
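To make the distinction concrete, here is a minimal sketch assuming hypothetical training records (the field names and contents are made up for illustration): an output-only example carries just the finished artifact, while a process-supervised one also carries the steps that produced it.

    # Hypothetical training records (field names made up for illustration).
    # Output-only supervision: the model sees only the finished artifact.
    output_only = {
        "prompt": "Write a closing argument for this case file.",
        "target": "Ladies and gentlemen of the jury, ...",
    }

    # Process supervision: the professional's intermediate steps are part
    # of the target, so imitating them is part of what gets learned.
    with_process = {
        "prompt": "Write a closing argument for this case file.",
        "target": (
            "Step 1: list the undisputed facts ...\n"
            "Step 2: find the opposing claim with the weakest support ...\n"
            "Draft: ...\n"
            "Final: Ladies and gentlemen of the jury, ..."
        ),
    }

If only the first kind of record exists, the steps are simply absent from the data, which is the point being made above.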
Is it not possible for the input-to-output process to be inferred by the LLM and then applied to new inputs to produce appropriate outputs?
Replace "LLM" with "student" and read that again. You don't just blindly give students output, you teach them, like what you are supposed to do with an LLM.
I don't get the point of what you're saying. I can ask it to explain how to solve an integral, with steps, right now.
I can ask it to tell me how to write like some person X right now.
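As a concrete example of "with steps", this is the kind of trace one might get back (integration by parts on an illustrative integral):

    ∫ x e^x dx
      set u = x, dv = e^x dx, so du = dx, v = e^x
      ∫ x e^x dx = x e^x − ∫ e^x dx
                 = x e^x − e^x + C
                 = (x − 1) e^x + C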