Are employees from Anthropic botting this post now? It should be one of the top-voted posts on this site, but it's nowhere in the first three pages.
Also remember: using Claude to code might make the company you're working for richer, but your own skills atrophy (I've seen it first hand), and you're not learning anything new. Professionally you are downgrading. Your next interview won't be testing your AI skills.
Yeah, the influx of people is disrupting my work, but it brings me joy to witness OpenAI's decline in consumer support. So much for their Jony Ive product, whatever it was.
I switched from OpenAI to Anthropic over the weekend due to the OpenAI fiasco.
I haven't been using the service long enough to comment on the quality of the responses/code generation, although the outages are really quite impactful.
I feel like half of my attempts to use Claude have been met with an error or outage, and meanwhile the usage limits on Claude Code seem quite aggressive. I asked Claude to make a website to search a database. It took about 6 minutes to build, and used 60% of my 4-hour quota window. I didn't get much further than asking it to make some basic font changes before I hit the limit. In under 30 minutes my entire 4-hour window was used up.
Meanwhile with ChatGPT Codex, a multi-hour coding session would still have 20%+ available at the end of the 4/5 hour window.
Emails with verification codes do not get delivered.
> Have a verification code instead?
> Enter the code generated from the link sent to [...]
> We are experiencing delivery issues with some email providers and are working to resolve this.
> Check your junk/spam and quarantine folders and ensure that [email protected] is on your allowed senders list.
I'm still waiting for a code from an hour ago. Meanwhile I managed to fix my source code on my own, like I did twelve months ago.
Jarred (from Bun) said that a lot of the errors are because of how much they've scaled in users recently (i.e., the flock that came from OpenAI).
It keeps going down. One more time and I'm moving to Codex. Or hell, I'd better go back to using my actual brain and coding, god forbid. Fml.
I hope they improve their incident-response comms in the future. 2.5 hours with nothing more than "We are continuing to investigate this issue" is pretty poor form. Their history of incident handling looks just as bad.
I was having an extended incognito chat with claude.ai, and then it stopped responding. I saved the transcript in a notepad and checked in another tab whether the service was down. I wonder if the incognito session is gone, and whether I can resurrect it by reposting the transcript. I've done that with Gemini, but there the transcript has markers like "Gemini said", which I don't see here. If anyone knows a solution, I'd appreciate it.
I'm basing my next projects on Claude Code's ability to write code for me. These disruptions are scary.
This comes as a reminder that software engineering is much more than generating code.
We build systems that can fail in unpredictable ways, and without deeply knowing the system we built, it's hard to understand what's going on.
Seems to be the biggest outage yet. Might be related to the power-loss events in the UAE; the timing is suspicious, as more datacenters appear to be hit.
Already made the switch back to Codex :-)
The service has been inconsistent and/or down for the last 12 hours.
They need to keep an emergency backup Claude to fix the production Claude when it goes down.
(More seriously I wonder if they'd consider using Openai or Gemini for this purpose)
This, right now, is making the case for OSS AI and local inference. $200/mo to get rate-limited makes an RTX 6000 Pro look cheap.
No wonder. Its performance overall was noticeably worse, like it had regressed to coding models from 1.5 years ago. I try not to use Claude during peak US hours because it seems to struggle more with reasoning and correctness then than during off-hours.
I've been noticing elevated stupidity.
"Do this"
"User wants me to [do complete opposite]"
Seems not to be as capable as a month ago.
Anyone else find this timing odd given the DoD ban?
Who fixes the AI when the AI is down? Semi-serious, since they're pretty big on not writing code themselves.
I won't hate you for downvoting me, but this is heroin-grade schadenfreude.
"98.92% uptime" is horrendous and unacceptable.
Only one 9 of availability means you are seriously unreliable.
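To put that in concrete terms, here's the arithmetic on the quoted figure (just math on the percentage, not Anthropic's published SLA):

```python
def downtime_hours(uptime_pct, period_hours):
    """Hours of downtime implied by a given uptime percentage over a period."""
    return period_hours * (1 - uptime_pct / 100)

# 98.92% uptime over a 30-day month (720 hours):
print(round(downtime_hours(98.92, 720), 1))  # 7.8 hours down per month

# Compare with "three 9s" (99.9%):
print(round(downtime_hours(99.9, 720), 2))   # 0.72 hours (~43 minutes)
```

So one 9 means nearly a full workday of outage every month, roughly ten times what a three-9s service would allow.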
AI has normalized single 9's of availability, even for non-AI companies like GitHub that have had to rapidly adapt to AI-driven scaleups in usage patterns. Understandably so: GPU capacity is pre-allocated months to years in advance, in large discrete chunks, to either inference or training, with a modest buffer that exists mainly so you can cannibalize experimental research jobs during spikes. It's just not financially viable to keep spades of reserve capacity, especially these days, when supply chains are already under great strain and chip production is becoming the bottleneck. And if they got around it by serving a quantized or otherwise ablated model (a common strategy in some instances), all the new users would be disappointed and it would damage trust.
Fewer 9's are a reasonable tradeoff for the ability to ship AI to everyone, I suppose. That's one way to prove the technology isn't reliable enough to be shipped into autonomous kill chains just yet lol.