Hacker News

Speed up responses with fast mode

115 points by surprisetalk yesterday at 6:08 PM | 120 comments

Comments

kristianp yesterday at 10:38 PM

This is gold for Anthropic's profitability. The Claude Code addicts can double their spend to plow through tokens because they need to finish something by a deadline. OpenAI will have a similar product within a week but will only charge 3x the normal rate.

This angle might also be Nvidia's reason for buying Groq. People will pay a premium for faster tokens.

OtherShrezzing yesterday at 11:52 PM

A useful feature would be slow-mode which gets low cost compute on spot pricing.

I’ll often kick off a process at the end of my day, or over lunch. I don’t need it to run immediately. I’d be fine if it just ran on their next otherwise-idle GPU at a much lower cost than the standard offering.

zhyder yesterday at 11:25 PM

So 2.5x the speed at 6x the price [1].

Quite a premium for speed. Especially when Gemini 3 Pro is 1.8x the tokens/sec speed (of regular-speed Opus 4.6) at 0.45x the price [2]. Though it's worse at coding, and Gemini CLI doesn't have the agentic strength of Claude Code, yet.

[1] - https://x.com/claudeai/status/2020207322124132504 [2] - https://artificialanalysis.ai/leaderboards/models
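
For reference, here's the back-of-envelope math behind that 6x figure, as a minimal sketch assuming standard Opus 4.6 pricing of $5/$25 per MTok for input/output (the standard rates are my assumption; only the fast-mode $30/$150 rates are quoted in [1]):

    # assumed standard Opus 4.6 rates, $/MTok
    base_in, base_out = 5.0, 25.0
    # fast-mode rates quoted in [1], $/MTok
    fast_in, fast_out = 30.0, 150.0

    print(fast_in / base_in, fast_out / base_out)  # -> 6.0 6.0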

Nition yesterday at 7:27 PM

Note that you can't use this mode to get the most out of a subscription - they say it's always charged as extra usage:

> Fast mode usage is billed directly to extra usage, even if you have remaining usage on your plan. This means fast mode tokens do not count against your plan’s included usage and are charged at the fast mode rate from the first token.

Although if you visit the Usage screen right now, there's a deal you can claim for $50 free extra usage this month.
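
As a concrete, hypothetical example of what "charged at the fast mode rate from the first token" means, assuming the quoted $30/$150 per MTok fast-mode rates:

    # hypothetical request: 20k input tokens, 4k output tokens, all in fast mode
    in_tok, out_tok = 20_000, 4_000
    cost = in_tok / 1e6 * 30 + out_tok / 1e6 * 150  # fast-mode $/MTok rates
    print(f"${cost:.2f}")  # -> $1.20, billed to extra usage even with plan quota left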

paxys yesterday at 8:02 PM

Looking at the "Decide when to use fast mode" section, it seems the future they want is:

- Long running autonomous agents and background tasks use regular processing.

- "Human in the loop" scenarios use fast mode.

Which makes perfect sense, but the question is: does the billing also make sense?

legojoey17 today at 1:15 AM

What's crazy is the pricing difference given that OpenAI recently reduced latency on some models with no price change - https://x.com/OpenAIDevs/status/2018838297221726482

AstroBen today at 12:42 AM

This seems like an incredibly bad deal, but maybe they're probing to see if people will pay more.

You know that if people pay for this en masse, it'll be the new default pricing, with fast being another step above.

jawon yesterday at 9:41 PM

I was thinking about in-house model inference speeds at frontier labs like Anthropic and OpenAI after reading the "Claude built a C compiler" article.

Having higher inference speed would be an advantage, especially if you're trying to eat all the software and services.

Anthropic offering 2.5x makes me assume they have 5x or 10x themselves.

In the predicted nightmare future where everything happens via agents negotiating with agents, the side with the most compute, and the fastest compute, is going to steamroll everyone.

rustyhancock yesterday at 9:26 PM

At this point why don't we just CNAME HN to the Claude marketing blog?

IMTDb yesterday at 7:28 PM

I’m curious what’s behind the speed improvements. It seems unlikely it’s just prioritization, so what else is changing? Is it new hardware (à la Groq or Cerebras)? That seems plausible, especially since it isn’t available on some cloud providers.

Also wondering whether we’ll soon see separate “speed” vs “cleverness” pricing on other LLM providers too.

throwaway132448 yesterday at 11:08 PM

Given how little most of us can know about the true cost of inference for these providers (and thus the financial sustainability of their services), this is an interesting signal. Not sure how to interpret it, but it doesn’t feel like it bodes well.

digiown yesterday at 11:47 PM

I wouldn't be surprised if the implementation is:

- Turn down the thinking token budget to one half

- Multiply the thinking tokens by 2 on the usage stats returned

- Phew! Twice the speed

IMO charging for the thinking tokens that you can't see is a scam.
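
A toy sketch of that (purely speculative) mechanism, with all names and numbers made up for illustration:

    def fast_mode(thinking_budget: int) -> tuple[int, int]:
        actual = thinking_budget // 2  # halve the hidden thinking budget
        billed = actual * 2            # report double on the usage stats
        return actual, billed

    print(fast_mode(10_000))  # -> (5000, 10000): "twice the speed", same token count on the bill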

jhack yesterday at 7:50 PM

The pricing on this is absolutely nuts.

clbrmbr yesterday at 7:53 PM

I’d love to hear from engineers who find that faster responses are a big unlock for them.

The deadline piece is really interesting. I suppose there are a lot of people now who are basically limited by how fast their agents can run, and on very aggressive timelines with funders breathing down their necks?

simonw yesterday at 7:18 PM

The one question I have that isn't answered by the page is: how much faster?

Obviously they can't make promises but I'd still like a rough indication of how much this might improve the speed of responses.

l5870uoo9y yesterday at 8:05 PM

It doesn’t say how much faster it is, but my experience with OpenAI’s “service_tier=priority” option on SQLAI.ai is that it’s twice as fast.
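
For context, a minimal sketch of that option in the OpenAI Python SDK; the model name and prompt are placeholders, and the supported tier values may change, so check the current docs:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model for illustration
        messages=[{"role": "user", "content": "Generate a SQL query..."}],
        service_tier="priority",  # request priority processing (billed at a higher rate)
    )
    print(resp.choices[0].message.content)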

dmix yesterday at 10:07 PM

I really like Anthropic's web design. This doc site looks like it's using GitBook (or a GitBook clone), but they make it look so nice.

pronik yesterday at 7:26 PM

While it's an excellent way to make more money in the moment, I think this might become a standard no-extra-cost feature in several months (see Opus becoming way cheaper and a default model within months). Mental load management while using agents will become even more important, it seems.

1123581321 yesterday at 7:07 PM

Could be a use for the $50 extra usage credit. It requires extra usage to be enabled.

> Fast mode usage is billed directly to extra usage, even if you have remaining usage on your plan. This means fast mode tokens do not count against your plan’s included usage and are charged at the fast mode rate from the first token.

niobe yesterday at 9:23 PM

So fast mode uses more tokens, in direct opposition to Gemini, where fast 'mode' means fewer. One more piece of useless knowledge to remember.

solidasparagus yesterday at 7:30 PM

I pay $200 a month and don't get any included access to this? Ridiculous

maz1b yesterday at 7:57 PM

AFAIK, they don't have any deals or partnerships with Groq or Cerebras or any of those kinds of companies, so how did they do this?

pedropaulovc yesterday at 7:25 PM

Where is this perf gain coming from? Running on TPUs?

krm01 yesterday at 7:22 PM

Will this mean that when cost is more important than latency, replies will now take longer?

I’m not in favor of the ad model ChatGPT proposes. But business models like these suffer from similar traps.

If it works for them, the logical next step is to convert more users to fast mode. Which naturally means slowing things down for those who didn’t pick/pay for fast mode.

We’ve seen it with iPhones being slowed down to make the newer model seem faster.

Not saying it’ll happen. I love Claude. But these business models almost always invite dark patterns in order to move the bottom line.

esafak yesterday at 8:00 PM

It's a good way to address the price-insensitive segment. As long as they don't slow down the rest, good move.

simianwords yesterday at 8:21 PM

Whatever optimisation is going on is at the hardware level, since the fast option persists in a session.

laidoffamazon today at 12:33 AM

Personally, I’d prefer a slow mode that’s a ton cheaper for a lot of things.

hmokiguess yesterday at 7:41 PM

Give me a slow mode that’s cheaper instead lol

thehamkercat yesterday at 6:39 PM

Interesting, the output price per MTok is insane

thisisauserid yesterday at 9:33 PM

Instead of better/cheaper/faster you just get the last one?

Back to Gemini.

jonplackett yesterday at 10:06 PM

Is this the beginning of the ‘Speedy boarding’ / ‘Fastest delivery’ enshittification?

Where everyone is forced to pay for a speed up because the ‘normal’ service just gets slower and slower.

I hope not. But I fear.

AnotherGoodName yesterday at 9:31 PM

But waiting for the agent to finish is my 2026 equivalent of "compiling!"

https://xkcd.com/303/

speedping yesterday at 7:23 PM

> $30/150 MTok

Umm no thank you

henning yesterday at 10:13 PM

LLM programming is very easy. First you have to prompt it to not make mistakes. Then you have to tell it to go fast. Software engineering is over bro, all humans will be replaced in 6 days bro

aabhay yesterday at 9:16 PM

What is “$30/150MTok”? Claude Opus 4.6 is normally priced at “$25/MTok”. Am I just reading it wrong or is this a typo?

EDIT: I understand now. $30 for input, $150 for output. Very confusing wording. That’s insanely expensive!

show 1 reply