Hacker News

Henchman21 · yesterday at 9:23 PM · 2 replies

If they've let their AI write the policy, and then they repeat that as policy, how exactly is this an "LLM hallucination" and not a real policy?


Replies

teraflop · yesterday at 9:31 PM

It's both, isn't it? If the AI writes the policy and is also responsible for enforcing it (by handling tickets and acting as a gatekeeper for which issues are escalated to humans who can do something about them), then the hallucination becomes real.

root_axis · yesterday at 10:45 PM

It's the same thing. Whether it was hallucinated upstream or in situ, the point is that it's not a real policy that the business adheres to, just something the LLM spat out.
