LLM or not, that seems to be an official response to a support request, one that clearly says "yes, we fucked up, but now you fuck off", and it looks like the model was conditioned to produce these particular responses.
That may be true (and likely is), but it doesn't explain why that initial answer from Anthropic was "we can't" instead of the truth, which is "we can".