This has been discussed ad nauseam in other places here over the years, but basically I don't think the conclusions in the post are correct.
It does not make a whole lot of sense to let an AI determine story, whether that's dialogue, changing the game state, or whatever. The reasons come down to a few things. One, the fundamental aspect of a game is that there are rules and boundaries, and these rules need to remain consistent (and testable). It is entirely jarring if an AI NPC says something inconsistent with the game state, or changes the game state in a way that violates the understood rules and boundaries of the game. That isn't fun for the player at all, even if it sounds cool. If you don't believe me, try playing D&D with these things and see how annoying it becomes. And if you decide, "OK, I'm going to tightly bind what the AI can and can't do with branching and rules," you are basically engineering the same things games have already engineered, so why even use AI? It's a solution looking for a problem.
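To make the "tightly bind" point concrete: once you gate the model's output through rules, the rules engine is doing the real work. A minimal sketch of such a gate (all names and the action schema here are hypothetical, not from any real game or mod):

```python
# Hypothetical sketch: an LLM proposes an NPC action as structured data,
# and ordinary, testable game logic decides whether it's allowed.
# At this point the deterministic rules layer, not the model, defines the game.

ALLOWED_ACTIONS = {"talk", "trade", "give_quest"}

def validate_action(action: dict, game_state: dict) -> bool:
    """Return True only if the proposed action is consistent with game state."""
    if action.get("type") not in ALLOWED_ACTIONS:
        return False
    if action["type"] == "give_quest":
        # The NPC may only hand out quests that exist in the game data.
        return action.get("quest_id") in game_state.get("available_quests", set())
    return True

state = {"available_quests": {"find_the_amulet"}}
print(validate_action({"type": "give_quest", "quest_id": "find_the_amulet"}, state))  # True
print(validate_action({"type": "open_portal"}, state))  # False: not a sanctioned action
```

Anything the validator rejects has to be re-prompted or discarded, which is exactly the branching-and-rules engineering games already do without an LLM.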
The second big issue is determinism: LLMs are fundamentally non-deterministic. In most games, you would expect an action to have the same, or at least a very similar, reaction most of the time. RNG already comes into play naturally in a lot of games, in a predictable way (dice rolls, chance to hit, etc.), so LLMs bring nothing new to the table here.
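The key distinction is that game RNG is unpredictable to the player but reproducible given a seed, which is what makes it testable. A toy illustration (the seed-mixing scheme here is just an assumption for the sketch):

```python
import random

# Game-style randomness: a dice roll derived from a fixed seed plus the
# turn number. Unpredictable to the player, but the same (seed, turn)
# always reproduces the same roll, so tests and replays are possible.
def roll_d20(seed: int, turn: int) -> int:
    rng = random.Random(seed * 1_000_003 + turn)  # mix seed and turn into one int
    return rng.randint(1, 20)

# Replaying the same turn with the same seed gives the identical roll:
assert roll_d20(42, 1) == roll_d20(42, 1)
# Different turns still vary within the expected 1..20 range:
assert 1 <= roll_d20(42, 2) <= 20
```

A sampled LLM response gives you no such replay guarantee without pinning temperature, model version, and more.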
For world gen, we already have games like No Man's Sky that deterministically generate quadrillions of worlds. What help is an LLM here? We already have the technology.
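The trick behind that kind of world gen is deriving every property from a hash of (global seed, coordinates), so astronomical numbers of worlds exist without storing any of them. A miniature sketch of the idea (the property names and mappings are made up for illustration, not No Man's Sky's actual scheme):

```python
import hashlib

# Deterministic procedural generation: hash the seed and coordinates,
# then read world properties out of the digest bytes. No storage needed;
# every visitor to the same coordinates sees the same world.
def world_at(seed: int, x: int, y: int) -> dict:
    h = hashlib.sha256(f"{seed}:{x}:{y}".encode()).digest()
    return {
        "terrain": ["ocean", "desert", "forest", "mountain"][h[0] % 4],
        "temperature": h[1] - 128,  # arbitrary mapping of a byte to -128..127
    }

# Same seed and coordinates always yield the same world:
assert world_at(7, 10, 20) == world_at(7, 10, 20)
```

An LLM generating worlds would need this same deterministic skeleton underneath anyway, or the world would change every time you looked at it.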
One area that would be interesting is agentic bot players, but that leads down the same path as the arguments above: there are already extremely sophisticated bots that play a huge variety of games. What do LLMs bring here?
"It is entirely jarring if an AI NPC says something inconsistent with the game state, or changes the game state in a way that violates the understood rules and boundaries of the game"
I played with the Mantella mod for Skyrim a few months back, and one of the problems with LLMs is that you can't keep them on topic. I even used a custom-trained one just for Skyrim, but it still had vast real-world knowledge it shouldn't. For instance, I asked a town guard where I could find Taylor Swift, and he said she might be down at the tavern playing music. The conversation didn't overtly break the theme, since the guard "stayed in character" and didn't start gushing about specific songs of hers, but he still "knew" who she was. Current-gen AIs can't be fenced in very well, and almost every game idea needs some sort of fencing in.
If you play along with the AI it's not bad, but if you poke at the edges the illusion breaks quickly. You can't prompt a current-gen AI to just "forget everything you shouldn't know because it doesn't fit the game universe."
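Part of why fencing fails is that the obvious mitigation, filtering inputs or outputs against a blocklist, only catches phrasings you anticipated, while the model's world knowledge leaks through everything else. A toy sketch of that brittleness (the blocklist approach here is just one naive strategy, not how Mantella works):

```python
# Naive keyword fence: flag player input that mentions blocked real-world
# topics. It catches the literal phrasing but not paraphrases, so the
# model's out-of-universe knowledge still leaks through.
BLOCKLIST = {"taylor swift"}

def is_fenced(player_input: str) -> bool:
    text = player_input.lower()
    return any(term in text for term in BLOCKLIST)

print(is_fenced("Where can I find Taylor Swift?"))       # True: caught
print(is_fenced("Where's the singer of Shake It Off?"))  # False: leaks through
```

Closing every paraphrase by hand is hopeless, which is why this feels like a limitation of the current architecture rather than a prompting problem.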
I expect future architectures will fix this, and that will help a lot. But we don't have them yet.