I don't believe that to be possible in general. We've already had millennia of philosophers attempting to make discoveries through sheer reasoning, and, with the (in the grand scheme of things, small) exception of formal logic, they failed to do so. Which leads me to a principle: no matter how smart you are, you still need the real world as a reference.
Once again, LLMs will have to be bound to some source of entropy or feedback as a limit. Sure, you might be able to throw terawatts of cycles at, say, music production, but without examples of what people already like, or test audiences, you cannot answer the question of whether the result is any good.
It's been proven possible in narrow domains like Go. There is no entropy or feedback of that sort; the system just keeps getting better.
Well, yes, that's why the rest of science was invented, no? I did not mean to imply that AI would restrict itself to philosophical thinking and formal logic.