Hacker News

SkyPuncher · today at 2:25 PM

I skimmed the issue. No wonder Anthropic closes these tickets out without much action. That’s just a wall of AI garbage.

Here’s what I’ve done to mostly fix my usage issues:

* Turn on max thinking on every session. It saves tokens overall because I’m not correcting it or having it waste energy on dead paths.

* Keep active sessions active. It seems like caches are expiring after ~5 minutes (especially during peak usage). When the caches expire, it seems like all tokens need to be rebuilt; this gets especially bad as token usage goes up.

* Compact after 200k tokens, as soon as I reasonably can. I have no data, but my usage absolutely skyrockets as I get into longer sessions. This is the most frustrating part because Anthropic forced the 1M model on everyone.
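A rough way to see why an expired cache inflates usage, as a back-of-the-envelope cost model. The multipliers here are assumptions based on Anthropic's published prompt-caching pricing (cache reads at roughly 10% of the base input price, cache writes at roughly 125%), not anything measured from Claude Code itself:

```python
# Toy cost model: relative input-token cost of one turn, warm vs. cold cache.
# Multipliers are assumptions from Anthropic's prompt-caching pricing docs.

BASE = 1.0          # relative cost per uncached input token
CACHE_READ = 0.1    # assumed: cached tokens re-read at ~10% of base
CACHE_WRITE = 1.25  # assumed: writing tokens into the cache costs ~125% of base

def turn_cost(context_tokens: int, cache_warm: bool) -> float:
    """Relative cost of sending one turn carrying `context_tokens` of history."""
    if cache_warm:
        return context_tokens * CACHE_READ
    # Cache expired: the entire context is reprocessed and written back.
    return context_tokens * CACHE_WRITE

# A 200k-token session, one turn with the cache warm vs. after it expired.
warm = turn_cost(200_000, cache_warm=True)    # 20,000 token-equivalents
cold = turn_cost(200_000, cache_warm=False)   # 250,000 token-equivalents
print(f"warm: {warm:,.0f}, cold: {cold:,.0f}, ratio: {cold / warm:.1f}x")
```

Under those assumed multipliers, every cache expiry makes the next turn ~12.5x more expensive, which would match the "keep sessions active" advice.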


Replies

losvedir · today at 3:16 PM

Haha, yeah, my eyes glazed over immediately on the issue. This was absolutely someone telling their Claude Code to investigate why they ran out of tokens and open the issue.

Good chance it's not real or misdiagnosed. But it gives me some degree of schadenfreude to see it happening to the Claude Code repo.

Chaosvex · today at 2:58 PM

I love how some comments tell you to turn max thinking on and others tell you to turn thinking off entirely. Apparently, they both save tokens!

Vibes, indeed.

himata4113 · today at 2:45 PM

The problem is actually that their cache invalidates randomly, which is why replaying inputs at 200k+ tokens sucks up all your usage. This is a bug in their systems that they refuse to acknowledge. My guess is that API clients evict subscription users' cache early, which would explain this behavior; if so, it's a feature, not a bug.

They also silently raised the usage that input tokens consume, so it's a double whammy.

stldev · today at 3:03 PM

Can confirm. Max effort helps; keeping context at or below ~20-25% is crucial these days.

> * Keep active sessions active. It seems like caches are expiring after ~5 minutes (especially during peak usage). When the caches expire, it seems like all tokens need to be rebuilt; this gets especially bad as token usage goes up.

Is this as opaque on their end as it sounds, or is there a way to check?
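For the raw API, at least, it doesn't have to be fully opaque: the Messages API response carries a `usage` object with cache counters. A sketch of the check (field names from Anthropic's API docs; the values below are made up for illustration):

```python
# Example shape of the `usage` object from a Messages API response.
# A large cache write with no cache read suggests the cache had expired.
usage = {
    "input_tokens": 1_200,
    "cache_creation_input_tokens": 180_000,  # context re-written into the cache
    "cache_read_input_tokens": 0,            # nothing served from the cache
    "output_tokens": 900,
}

def cache_was_cold(usage: dict) -> bool:
    """Heuristic: more tokens written to the cache than read from it."""
    return usage["cache_creation_input_tokens"] > usage["cache_read_input_tokens"]

print(cache_was_cold(usage))  # True for the example above
```

Whether Claude Code surfaces these counters anywhere is a separate question; this only shows what the underlying API reports.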

ayhanfuat · today at 2:55 PM

> * Turn on max thinking on every session. It saves tokens overall because I’m not correcting it or having it waste energy on dead paths.

This is definitely true. Ever since I realized there is an /effort max option, I'm no longer fighting it as much or wasting hours.

coderbants · today at 2:29 PM

Can’t you turn the 1M off with a /model opus (or /model sonnet)?

At least until recently, the 1M model was separated out as /model opus[1M].

hartator · today at 2:28 PM

Everything starts to feel like AI slop these days. Including this comment.