That's the point, isn't it? The missing link. AIs can't yet truly comprehend, or internalize, or whatever you want to call it. That's probably equivalent to AGI, or the singularity. We're not there yet, and feeding copious amounts of data into existing architectures won't get us there either.
A human with all that data, assuming it could somehow fit in their brain, would likely come up with something interesting. Then again... I'm not entirely sure it's that simple. I'd wager most of us already have enough knowledge in our heads to come up with something if we applied ourselves, but ideas don't spontaneously appear just because the knowledge is there.
What if we took our AI models and forced them to continually try making connections between unlikely things? The novel stuff is likely in the parts that don't already have strong connections, the ones that are missing because the research hasn't been done rather than because they couldn't exist. But how would the model evaluate what's interesting?
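Roughly what I have in mind, as a toy sketch only: take concept embeddings, pick the pairs that sit far apart (the weak-connection regions), then hit the part nobody knows how to do, which is scoring which of those pairs is actually worth pursuing. The concept names, vectors, and the scorer below are all made up for illustration, not a real pipeline.

```python
import itertools
import math
import random

# Toy stand-in for concept embeddings you'd normally get from a real model.
# Both the concepts and the vectors are invented for this sketch.
CONCEPTS = {
    "protein folding": [0.9, 0.1, 0.0, 0.2],
    "origami":         [0.7, 0.2, 0.1, 0.1],
    "auction theory":  [0.1, 0.8, 0.3, 0.0],
    "ant colonies":    [0.2, 0.3, 0.9, 0.1],
    "compiler design": [0.0, 0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def weakly_connected_pairs(concepts, max_similarity=0.5):
    """Yield concept pairs whose embeddings are far apart, i.e. the
    regions where strong connections don't already exist."""
    for a, b in itertools.combinations(concepts, 2):
        if cosine(concepts[a], concepts[b]) < max_similarity:
            yield a, b

def interestingness(pair):
    """The unsolved part. This placeholder just returns noise; a real
    version would need model-judged plausibility, citation-gap analysis,
    human review, or something else entirely."""
    return random.random()

if __name__ == "__main__":
    candidates = sorted(weakly_connected_pairs(CONCEPTS),
                        key=interestingness, reverse=True)
    for a, b in candidates[:3]:
        print(f"Try connecting: {a} <-> {b}")
```

The first half is easy and basically mechanical; the whole question lives in that `interestingness` stub.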