You wouldn't say this about a message encrypted with AES, though, since there's not just a "human patience" limit but also a computational cost that is (we are pretty sure) unbearable.
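To put a rough number on "unbearable," here is a back-of-the-envelope sketch of brute-forcing AES-128. The hardware figures are deliberately generous assumptions, not measurements:

```python
# Back-of-the-envelope: years to exhaust the AES-128 keyspace.
# Both rates below are generous, made-up assumptions.
keyspace = 2 ** 128                 # number of AES-128 keys
keys_per_sec_per_machine = 10**12   # assume a trillion key trials/sec per machine
machines = 10**9                    # assume a billion such machines
seconds_per_year = 60 * 60 * 24 * 365

years = keyspace / (keys_per_sec_per_machine * machines * seconds_per_year)
print(f"{years:.2e} years")  # ~1.1e10 years, on the order of the age of the universe
```

Even with those absurdly optimistic assumptions, the expected time is comparable to the age of the universe, which is the sense in which the cost is computationally, not just humanly, unbearable.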
We don't know, but it's entirely plausible that the cost of analyzing LLMs in their current form, to the point of removing all doubt about how/what they are thinking, is also unbearably high.
We might also find that it's possible for us (or for an LLM training process itself) to encrypt LLM weights in such a way that the only way to learn anything about what a model knows is to ask it.
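To make the shape of that idea concrete, here is a toy sketch. Everything in it is hypothetical: the names are invented, and in plain Python nothing actually prevents inspecting the key or the decrypted weights. A real version would need the decryption step itself to be unrecoverable (e.g. via some form of program obfuscation); this only illustrates the interface the speculation describes.

```python
from typing import Callable

class SealedModel:
    """Toy sketch: weights are sealed, and only a query interface is public."""

    def __init__(self,
                 encrypted_weights: bytes,
                 decrypt: Callable[[bytes], bytes],
                 run_inference: Callable[[bytes, str], str]):
        self._encrypted_weights = encrypted_weights
        self._decrypt = decrypt              # hypothetical: key never exposed
        self._run_inference = run_inference  # hypothetical forward pass

    def ask(self, prompt: str) -> str:
        # Cleartext weights exist only transiently, inside this call.
        weights = self._decrypt(self._encrypted_weights)
        return self._run_inference(weights, prompt)

    # No other method touches the weights: the only way to learn what the
    # model "knows" is to query it through ask().
```

If something like this were cryptographically enforceable, interpretability-by-inspection would be off the table by construction, and behavioral probing would be all that's left.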