Except that a corrupt packet can easily be detected (is the checksum valid?). There is an algorithm you can run that tells you, with high confidence, whether a given packet is corrupt.
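To make the contrast concrete, here is a minimal sketch of that kind of detection, using CRC32 as a stand-in for whatever checksum the protocol actually uses (the `make_packet`/`is_corrupt` helpers are illustrative, not any real protocol's API):

```python
import zlib

def make_packet(payload: bytes) -> bytes:
    # Append a 4-byte CRC32 checksum to the payload.
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def is_corrupt(packet: bytes) -> bool:
    # Recompute the CRC over the payload and compare to the stored value.
    payload, stored = packet[:-4], packet[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") != stored

pkt = make_packet(b"hello, world")
print(is_corrupt(pkt))                        # intact packet -> False

flipped = bytes([pkt[0] ^ 0x01]) + pkt[1:]    # flip a single bit
print(is_corrupt(flipped))                    # corruption detected -> True
```

The point is that corruption is a well-defined mathematical property of the bytes themselves, checkable without any reference to what the packet "means".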
In an LLM, a token is just a token. There are no semantics attached to anything in there. To answer the question "is this a good answer or not?" you would need a model that somehow doesn't hallucinate, because the tokens themselves carry no mathematical property you could check. A "hallucinated" token cannot, in any mathematical way, be distinguished from one that "wasn't hallucinated". That's the big difference.
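A toy illustration of that point, using a made-up vocabulary in place of a real tokenizer (the `vocab` mapping is purely hypothetical):

```python
# A stand-in for a tokenizer's vocabulary: words mapped to opaque integer IDs.
vocab = {"the": 0, "capital": 1, "of": 2, "france": 3, "is": 4,
         "paris": 5, "lyon": 6}

true_ids  = [vocab[w] for w in "the capital of france is paris".split()]
wrong_ids = [vocab[w] for w in "the capital of france is lyon".split()]

# Both outputs are plain lists of integers with identical structure.
# Nothing in the data itself marks one as factual and the other as
# hallucinated -- there is no checksum-like function to apply.
print(true_ids)   # [0, 1, 2, 3, 4, 5]
print(wrong_ids)  # [0, 1, 2, 3, 4, 6]
```

Unlike the packet case, there is no function over these integers that separates the correct sequence from the wrong one; "correctness" lives entirely outside the data.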
All of the techniques you mentioned are mathematically proven to improve their performance target in a controlled, well-understood way. We know their limitations, we know their strengths. They are backed by solid foundations and can be relied upon.
That is not comparable to an LLM, where the best you can do is "pull more heuristics out of someone's ass and hope for the best".