I recently found out that there's no such thing as Anthropic support. And that made me sad, but not for the reasons you'd expect.
Out of all the tech organizations, frontier labs are the ones you'd expect to be trying out cutting-edge forms of support. Out of all the different things these agents can do, surely most forms of "routine" customer support are the lowest-hanging fruit?
I think it's possible for Anthropic to make the kind of experience that delights customers. Service that feels magical. Claude is such an incredible breakthrough, and I would be very interested in seeing what Anthropic can do with Claude let loose.
I also think it's essential for the Anthropic platform in the long run. And not just in the obvious ways (customer loyalty, etc.). I don't know if anyone has brought this up at Anthropic, but it's such a huge risk for Anthropic's long-term strategic position. They're begging corporate decision makers to ask the question, "If Anthropic doesn't trust Claude to run its support, then why should we?"
I would say it's a strong sign that they don't yet trust their agent to make the significant business decisions a support agent would have to make: reopening accounts, closing them, refunds, … People would immediately start trying to exploit it, and would likely succeed.
There was that experiment where an office gave Claude control of its vending machine ordering, with… interesting results.
My assumption is that Claude isn’t used directly for customer service because:
1) it would be too suggestible in some cases
2) even in more usual circumstances it would be too reasonable (“yes, you’re right, that is bad performance, I’ll refund your yearly subscription”, etc.) and not act as the customer-unfriendly wall that customer service sometimes needs to be.
There is a Discord, but I have not found it to be the friendliest of places.
At one point I observed a conversation that, to me, looked like a user attempting to communicate in good faith who was given instructions they clearly did not understand, and was then banned for not following the rules.
It seems now they have a policy of
Warning on First Offense → Ban on Second Offense
The following behaviors will result in a warning.
Continued violations will result in a permanent ban:
Disrespectful or dismissive comments toward other members
Personal attacks or heated arguments that cross the line
Minor rule violations (off-topic posting, light self-promotion)
Behavior that derails productive conversation
Unnecessary @-mentions of moderators or Anthropic staff
I'm not sure how many groups moderate in a manner where a second off-topic comment is worthy of a ban. It seems a little harsh, and I'm not a fan of obviously subjective bannable offences. I'm a little surprised that Anthropic hasn't fostered a more welcoming community. Everyone is learning this stuff fresh, together or not. There is plenty of opportunity for people to help each other.
Claude is an amazing coding model; its other abilities are middling. Anthropic's strategy seems to be to just focus on coding, and they do it well.
https://support.claude.com/en/articles/9015913-how-to-get-su...
Their support includes talking to Fin, their AI support agent, with escalation to humans as needed. I don't use Claude and have never used the support bot, but their docs say they have support.
LLMs aren't really suitable for much of anything that can't already be done as self-service on a website.
These days, a human only gets involved when the business process wants to put some friction between the user and some action. An LLM can't really be trusted for this kind of stuff due to prompt injection and hallucinations.
Human attention will be the luxury product of the next decade.
Offering any support is setting expectations of receiving support.
If you don't offer support, reality meets expectations, which sucks, but not enough for the money machine to care.
> They're begging corporate decision makers to ask the question, "If Anthropic doesn't trust Claude to run its support, then why should we?"
Don't worry - I'm sure they won't, and those stakeholders will feel confident in their enlightened decision to send their most frustrated customers through a chatbot that repeatedly asks them for detailed and irrelevant information and won't let them proceed to any other support level until it is provided.
I, for one, welcome our new helpful overlords that have very reasonably asked me for my high school transcript and a ten-page paper on why I think the bug happened before letting me talk to a real person. That's efficiency.
Eh, I can see support simply not being worth any real effort, i.e. having nobody working on it full time.
I worked for a unicorn tech company where they determined that anyone with under 50,000 ARR was too unsophisticated to be worth offering support. Their emails were sent straight to the bin until they quit. The support queue existed entirely for their psychological comfort / to buy a few months of extra revenue.
It didn't matter what their problems were. Supporting smaller customers simply wasn't worth the effort, statistically.
> I think it's possible for Anthropic to make the kind of experience that delights customers. Service that feels magical. Claude is such an incredible breakthrough, and I would be very interested in seeing what Anthropic can do with Claude let loose.
Are there enough people who need support that it matters?
> I recently found out that there's no such thing as Anthropic support.
The article discusses using Anthropic support, without much satisfaction, so it seems like you "recently found out" something false.
> Out of all of the different things these agents can do, surely most forms of "routine" customer support are the lowest hanging fruit?
I come from a world where customer support is a significant operational expense, and everyone was SO excited to implement AI for this. It doesn't work particularly well, and it shows a profound gap between what people think working in customer service is like and how fucking hard it actually is.
Honestly, AI is better at replacing the cost of upper-middle management and executives than it is at solving customer service problems.