the model generates probabilities for the next token; you then set the probability of disallowed tokens to 0 before sampling (whether you pick greedily or sample probabilistically)
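a minimal sketch of that masking step, assuming a toy vocabulary and made-up logits (in practice you'd set the banned tokens' *logits* to -inf, which makes their probability exactly 0 after softmax):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical vocabulary and next-token logits from some model
vocab = ["the", "cat", "dog", "banned_word", "sat"]
logits = np.array([2.0, 1.5, 1.0, 3.0, 0.5])

def sample_with_ban(logits, banned_ids, greedy=False):
    masked = logits.copy()
    masked[banned_ids] = -np.inf                 # exp(-inf) = 0, so banned tokens get zero probability
    finite_max = masked[np.isfinite(masked)].max()
    probs = np.exp(masked - finite_max)          # stable softmax
    probs /= probs.sum()
    if greedy:
        return int(np.argmax(probs))             # deterministic pick
    return int(rng.choice(len(probs), p=probs))  # probabilistic sampling

banned = [vocab.index("banned_word")]
token_id = sample_with_ban(logits, banned)
print(vocab[token_id])  # never "banned_word"
```

with greedy picking, "banned_word" had the highest raw logit, but after masking the choice falls to "the" instead.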
but filtering a particular token doesn't really fix anything, because it's a language model: it understands synonyms and indirect references, so it can express the same banned idea with different tokens.