
DeepSeek-V4: Towards Highly Efficient Million-Token Context Intelligence

141 points | by cmrdporcupine | today at 3:07 AM | 13 comments

Comments

anonzzzies, today at 4:06 AM

From this thread [0], am I right to assume that because it's A49B (49B active parameters), despite being 1.6T total, it could theoretically run locally on consumer hardware, if perhaps very slowly? Or is that wrong?

[0] https://news.ycombinator.com/item?id=47864835
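A rough sketch of why the A49B figure matters for local inference (parameter counts from the comment above; the bytes-per-parameter values are the usual rule-of-thumb quantization sizes, not DeepSeek-specific numbers): all 1.6T weights still have to be stored somewhere, but only ~49B are exercised per forward pass, so per-token compute scales with the active set while storage scales with the total.

```python
# Back-of-envelope weight-storage math for a 1.6T-parameter MoE model
# with ~49B active parameters per token (figures from the comment above).
BYTES_PER_PARAM = {"fp16": 2.0, "q8": 1.0, "q4": 0.5}  # common quantization sizes

def weight_size_gb(n_params: float, quant: str) -> float:
    """Approximate weight storage in GB at a given quantization."""
    return n_params * BYTES_PER_PARAM[quant] / 1e9

total = 1.6e12   # every expert must be stored (RAM/SSD)
active = 49e9    # parameters actually touched per token

print(f"total weights at q4:  {weight_size_gb(total, 'q4'):.0f} GB")   # ~800 GB
print(f"active weights at q4: {weight_size_gb(active, 'q4'):.1f} GB")  # ~24.5 GB
```

So even at 4-bit, the full model needs on the order of 800 GB of storage, which is why "theoretically, very slow" (streaming experts from SSD/RAM) is the realistic framing for consumer hardware.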

woeirua, today at 3:13 AM

Hmm. Looks like DeepSeek is just about 2 months behind the leaders now.

statements, today at 4:48 AM

The quality of this model relative to its price makes it an insane value.

cmrdporcupine, today at 3:13 AM

Pricing: https://api-docs.deepseek.com/quick_start/pricing

"Pro" $3.48 / 1M output tokens vs $4.40 for GLM 5.1 or $4.00 for Kimi K2.6

"Flash" is only $0.28 / 1M and seems quite competent

(EDIT: Note that the default models opencode etc. hit on the DeepSeek API (deepseek-chat / deepseek-reasoner) appear to be "Flash".)
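To put the quoted output-token rates side by side, a quick sketch (prices taken from the comment above; the 2M-token workload is purely illustrative):

```python
# Output-token prices in USD per 1M tokens, as quoted in the comment above.
PRICES = {
    "DeepSeek Pro": 3.48,
    "GLM 5.1": 4.40,
    "Kimi K2.6": 4.00,
    "DeepSeek Flash": 0.28,
}

def output_cost(tokens: int, price_per_million: float) -> float:
    """Cost in USD to generate `tokens` output tokens at a given rate."""
    return tokens / 1_000_000 * price_per_million

# Illustrative workload: 2M output tokens under each price.
for name, price in PRICES.items():
    print(f"{name}: ${output_cost(2_000_000, price):.2f}")
```

At these rates, Flash comes out more than 10x cheaper per output token than any of the three "frontier-tier" prices listed.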

taosx, today at 3:41 AM

So the R line (R2) is discontinued, or folded back into V4, right?
