LLM hallucinations aren't errors.
LLMs generate text from the weights of a model, and some of that text happens to be correct statements about the world. That doesn't mean the rest was generated incorrectly.
Do you know the difference between verification and validation?
You're describing an absence of errors in verification (the system works as designed/built; the equations are correct).
GP is describing an error in validation (it isn't doing what we want, require, or expect).
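A toy sketch of that split, with all names hypothetical and the sampler reduced to a one-liner for illustration:

```python
# Verification: does the system behave as designed/specified?
# Validation: does that behaviour meet what the user actually needs?

def sample_next_token(weights):
    """Pick the highest-weight token -- the 'design' we verify against."""
    return max(weights, key=weights.get)

# Verification passes: the function does exactly what its spec says.
assert sample_next_token({"Paris": 0.7, "Lyon": 0.3}) == "Paris"

# Validation is about the user's requirement (factually correct answers).
# The sampler can pass verification and still emit a confident falsehood.
hallucinated = sample_next_token({"Atlantis": 0.8, "Paris": 0.2})
assert hallucinated == "Atlantis"  # verified behaviour, failed validation
```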