There is some ability for it to make novel connections, but it's pretty small. You can see this yourself by having it build novel systems.
It largely cannot imagine anything beyond the usual, but there is a small part that can. This is similar to in-context learning: it's weak, but it is there.
It would be incredible if meta-learning/continual learning found a way to train exactly for this kind of novel learning path. But that's literally AGI, so maybe 20 years from now? Or never..
You can see this on CL benchmarks. There is SOME signal, but it's crazy low. When I was training CL models I found the signal was in the single percentage points. Some could easily argue it was zero, but I really do believe there is a very small amount in there.
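To make "single percentage points of signal" concrete, here's a minimal sketch of forward transfer, a standard continual-learning metric that captures exactly this kind of weak effect. All numbers are made up for illustration; the function name and accuracy matrix are my own, not from any specific benchmark.

```python
# Hypothetical sketch: forward transfer (FWT) from a CL accuracy matrix.
# R[i][j] = accuracy on task j after sequentially training through task i.
# FWT averages how much prior training helps on each task *before* the
# model has seen that task, relative to a random-init baseline.

def forward_transfer(R, baseline):
    T = len(R)
    # For each task j > 0: accuracy just before training on it,
    # minus random-init accuracy on it.
    gains = [R[j - 1][j] - baseline[j] for j in range(1, T)]
    return sum(gains) / len(gains)

# Made-up numbers for 3 tasks; rows are "after training task i".
R = [
    [0.90, 0.52, 0.50],  # after task 0
    [0.80, 0.91, 0.53],  # after task 1
    [0.75, 0.84, 0.92],  # after task 2
]
baseline = [0.50, 0.50, 0.50]  # random-init accuracy per task

print(round(forward_transfer(R, baseline), 3))
```

With these toy numbers the result is 0.025, i.e. about 2.5 percentage points: nonzero, but small enough that you could argue it's noise, which is the point.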
This is also why any novel work or findings come out of MASSIVE compute budgets. They find RL environments that can extract that small amount. Is it random chance? Maybe, hard to say.
Is this so different from what we see in humans? Most people do not think very creatively. They apply what they know in situations they are familiar with. In unfamiliar situations they don't know what to do and often fail to come up with novel solutions. Or maybe in areas where they are very experienced they will come up with something incrementally better than before. But occasionally a very exceptional person makes a profound connection or leap to a new understanding.