People need to understand a few things: vague questions make the models roam endlessly, "exploring" dead ends. "Restarting" old chats immediately eats a lot of context. Anthropic CAN change their limits and rates as they see fit; there have never been hard promises or SLOs on these plans.
With that said, I pay for the Pro subscription ($20/mo) and I've hit limits maybe two or three times over a period of four months building a simple running app in Python. I wouldn't call it production-ready, but it's not nothing either.
If people were more willing to aggressively prune their context and scope their tasks well, they could get a lot more done with it, at least in my experience. Anthropic can't really fix this on their end because the underlying model architecture can't just be "patched". But I definitely feel a lot of people can't wrap their heads around the new paradigms needed to prompt these models effectively.
Additionally, opting out is always an option… but these complaints feel more like laziness than real, structural issues with the model/harness…
> Anthropic CAN change their limits and rates as they see fit, there’s never been hard promises or SLOs on these plans.
No, they can't. When I buy an annual subscription and prepay for the year, they can't just go "ok, now you get one token a month" a day in. I bought the plan as it was sold to me. They can't change anything until the next renewal.
They rolled out 1M context and then they start doing this? I know Pro doesn't have access to the 1M context, but what a joke.
This is a copypasta, right? I'm damn confident I've read the same content before.