It’s hard to believe this when the LLM “knows” so much more than us, yet still cannot be creative outside its training distribution.
The LLM doesn't "know" more than us - it has compressed more patterns from text than any human could process. That's not the same as knowledge. And yes, the sampling process deliberately skews the output distribution toward high-probability tokens to maintain coherence - without that bias toward seen patterns, it would generate nonsense. That's precisely why it can't be creative outside its training distribution: the system is tuned to suppress novel combinations that deviate too far from learned patterns. Coherence and genuine creativity are in tension here.
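A minimal sketch of one mechanism behind that bias: the softmax temperature applied at sampling time. The logit values below are made up for illustration - lower temperature concentrates probability mass on the most familiar continuation, higher temperature flattens the distribution (more novelty, less coherence).

```python
import math

def softmax_with_temperature(logits, temperature):
    # Scale logits by 1/T before the softmax: low T sharpens the
    # distribution toward the highest-probability ("seen") token,
    # high T pushes it toward uniform.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token logits: one familiar continuation, two rarer ones.
logits = [4.0, 2.0, 1.0]

sharp = softmax_with_temperature(logits, 0.5)  # strongly biased toward pattern
flat = softmax_with_temperature(logits, 2.0)   # closer to uniform

print([round(p, 3) for p in sharp])  # top token dominates
print([round(p, 3) for p in flat])   # rarer tokens get real probability
```

The trade-off the thread is describing lives in that one parameter: push it up and the model samples "creative" low-probability continuations that are usually incoherent; push it down and you get fluent restatements of the training distribution.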
When are we as humans creative outside our training data? It's very rare that we actually discover something truly novel. When it happens, it's often random: stumbling onto it, brute force, or simply being in the right place at the right time.
On the other hand, until a novel output is proven, it would likely be dismissed as a hallucination. You need to test an idea before you can dismiss it. (Discoveries were once deemed witchcraft, and people were burned for them.) We also deliberately reduce sampling randomness and regularize training to avoid overfitting.
Day-to-day human creative output is actually less exciting when you think about it further: we build on pre-existing knowledge, no different from good prompt output with the right input. Humans are just more knowledgeable and smarter at the moment.