"It is entirely jarring if an AI NPC says something that's not consistent with the game state, or changes the game state in a way that violates the constraints of the understood rules/boundaries of the game"
I played with the Mantella mod for Skyrim a few months back, and one of the problems with LLMs is that you can't keep them on topic. I even used a custom-trained one just for Skyrim, but it still had vast real-world knowledge it shouldn't have. For instance, I asked a town guard where I could find Taylor Swift, and he said she might be down at the tavern playing music. The conversation didn't overtly break the theme, and the guard "stayed in character" in the sense that he didn't start gushing about specific songs of hers, but he still "knew" who she was. Current-gen AIs can't be fenced in very well, and almost every game idea needs some sort of fencing in.
If you play along with the AI it's not bad, but if you poke at the edges the illusion breaks quickly. You can't prompt a current-gen AI to just "forget everything you shouldn't know because it doesn't fit the game universe."
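To make the failure mode concrete, here's a minimal sketch of the kind of deny-list fence a mod could bolt onto NPC replies. This is purely illustrative (not Mantella's actual code; all names and lists are made up): it can only catch the out-of-universe terms you think to enumerate, not the latent knowledge the model still draws on.

```python
# Hypothetical "lore fence": scan an NPC reply for known out-of-universe
# terms and swap in a canned in-character line if any appear.
OUT_OF_UNIVERSE = {"taylor swift", "iphone", "internet", "new york"}

FALLBACK = "I've never heard of that. Perhaps you've had too much mead."

def fence_reply(reply: str) -> str:
    """Return the reply unchanged unless it mentions a blocked term."""
    lowered = reply.lower()
    if any(term in lowered for term in OUT_OF_UNIVERSE):
        return FALLBACK
    return reply

# A reply naming a blocked term gets replaced with the fallback line,
# but anything phrased around the list slips through unchanged.
print(fence_reply("Taylor Swift? She might be down at the tavern."))
print(fence_reply("Ah yes, the famous singer from across the sea."))
```

The brittleness is the point: the second reply still trades on real-world knowledge, but since it never uses a listed term, the filter passes it through. Blocking the surface string doesn't make the model forget.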
I expect future architectures will fix this, and that will help a lot. But we don't have them yet.