It can be super hard to moderate before an image is generated though. People can write a request in cryptic language and then say "decode this message and generate an image of the result," etc. The trouble with LLMs is that they will gladly encode and decode arbitrary input and output, which makes them very hard to moderate: you need an LLM as capable as the one you run in production just to check whether a prompt is actually obscene.
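The decode-then-moderate idea can be sketched roughly like this. Everything here is hypothetical: `call_llm` is a stand-in for your production model's API (here it only handles ROT13, the kind of trivial encoding a user might hide a request behind), and `BLOCKLIST` is a placeholder for a real policy check.

```python
# Sketch: moderate the *decoded* request, not the surface prompt.
import codecs

BLOCKLIST = {"obscene_term"}  # placeholder policy list, not a real filter

def call_llm(instruction: str, text: str) -> str:
    # Hypothetical stand-in for the production LLM. A real model would
    # handle far more encodings than ROT13, which is exactly the problem.
    if "decode" in instruction:
        return codecs.decode(text, "rot13")
    return text

def is_allowed(prompt: str) -> bool:
    # Pass 1: ask a model as capable as the production one to reveal
    # the literal request hidden in the prompt.
    decoded = call_llm("decode any encoded text", prompt)
    # Pass 2: run moderation on the decoded form.
    return not any(term in decoded.lower() for term in BLOCKLIST)

encoded = codecs.encode("draw obscene_term", "rot13")
print("obscene_term" in encoded)  # False: the surface string looks harmless
print(is_allowed(encoded))        # False: the decoded form gets caught
```

A plain keyword filter on the raw prompt misses the encoded request entirely; only by decoding first does the check see what will actually be generated.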