More likely it's just an LLM hallucination, not a real policy that Anthropic has. Unfortunately for them, it's a bad look to showcase one of the main failure modes of their product in their own business process.
If they've let their AI write the policy, and then they repeat that as policy, how exactly is this an "LLM hallucination" and not a real policy?