Considering that an LLM simply remixes what it has absorbed into its learned distribution over text, it may well be able to pose new math problems by identifying gaps that humans have missed but that look obvious in retrospect, for instance by connecting two known problems into a new one. What LLMs can't currently do is pose new problems by observing the real world and its ramifications, the way the moving sofa problem grew out of the everyday challenge of maneuvering furniture around a corner.