Hacker News

root_axis · yesterday at 8:41 PM

More likely it's just an LLM hallucination, not a real policy that Anthropic has. Unfortunately for them, it's a bad look to showcase one of their product's main failure modes in their own business process.


Replies

Henchman21 · yesterday at 9:23 PM

If they let their AI write the policy, and then they repeat it as policy, how exactly is this an "LLM hallucination" and not a real policy?
