Hacker News

bayarearefugee · yesterday at 9:21 PM · 1 reply

I mostly use Gemini, so I can't speak for Claude, but Gemini definitely has variable quality at different times, though I've never bothered to try to find a specific time-of-day pattern to it.

The most reliable time to see it fall apart is when Google makes a public announcement that is likely to cause a sudden influx of people using it.

And there are multiple levels of failure: first you start seeing iffy responses of obviously lesser quality than usual, and then, if things get really bad, you start seeing just random errors where Gemini will suddenly lose all of its context (even in a new chat), or start failing at the UI level by not bothering to finish answers, etc.

The obvious likely reason for this is that when the models are under high load, the providers engage in some kind of dynamic load balancing, falling back to lighter models or limiting the amount of time/resources allowed for any particular prompt.
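To make the speculation concrete, here's a minimal sketch of what that kind of load-shedding router could look like. This is purely hypothetical — the tier names, thresholds, and token budgets are made up for illustration, not anything Google has documented:

```python
# Hypothetical sketch of load-based model fallback (illustrative only,
# not Google's actual serving logic): as concurrent load rises, the
# router downgrades to lighter models and tighter per-prompt budgets.

from dataclasses import dataclass

@dataclass
class Tier:
    model: str        # hypothetical model name
    max_tokens: int   # per-prompt resource cap

# Ordered from best to lightest; thresholds are load fractions (0.0-1.0).
TIERS = [
    (0.70, Tier("full-model", 8192)),   # load < 70%: full quality
    (0.90, Tier("lite-model", 4096)),   # load < 90%: lighter fallback
    (1.01, Tier("mini-model", 1024)),   # near saturation: minimal budget
]

def route(load: float) -> Tier:
    """Pick a serving tier for the current load fraction."""
    for threshold, tier in TIERS:
        if load < threshold:
            return tier
    return TIERS[-1][1]  # fail-safe: lightest tier

print(route(0.5).model)   # -> full-model
print(route(0.95).model)  # -> mini-model
```

A scheme like this would explain the symptoms in the thread: quality degrades gradually under moderate load (lighter model), then answers get truncated outright when the per-prompt budget is slashed near saturation.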


Replies

kevinsync · yesterday at 9:27 PM

I suspect they might transparently fall back too: Opus 4.5 has been really reasonable lately, except right after it launched, and also around any service interruptions / problems reported on status.claude.ai. Once those issues resolve, the results feel very "Sonnet" for a few hours, and it starts making a lot more mistakes. When that happens, I'll usually just pause Claude and prompt Codex and Gemini with the same issue to see what comes out of the black hole... then a bit later, Claude mysteriously regains its wits.

I just assume it went to the bar, got wasted, and needed time to sober up!
