Main discussion: https://news.ycombinator.com/item?id=47884971
Hmm. Looks like DeepSeek is just about 2 months behind the leaders now.
The quality of this model vs the price is an insane value deal.
Pricing: https://api-docs.deepseek.com/quick_start/pricing
"Pro" $3.48 / 1M output tokens vs $4.40 for GLM 5.1 or $4.00 for Kimi K2.6
"Flash" is only $0.28 / 1M and seems quite competent
(EDIT: Note that the default model names opencode etc. use for the DeepSeek API (deepseek-chat / deepseek-reasoner) appear to map to "Flash".)
So is the R line (R2) discontinued, or folded back into v4?
From this thread [0], I gather that because it's A49B (despite being 1.6T total), it can run locally on consumer hardware, at least in theory and perhaps very slowly. Or is that wrong?
[0] https://news.ycombinator.com/item?id=47864835
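For a rough sense of why "theoretically, very slow" is plausible: with a MoE model, total parameters set the memory footprint while active parameters set the per-token compute and bandwidth. A back-of-envelope sketch, taking the 1.6T / A49B figures from the linked thread and assuming 4-bit quantization (an assumption, not a published spec):

```python
# Back-of-envelope memory math for a MoE model. The parameter counts
# (1.6T total, 49B active) are the figures quoted in the linked thread;
# 4-bit quantization (0.5 bytes/param) is an assumption for illustration.
TOTAL_PARAMS = 1.6e12
ACTIVE_PARAMS = 49e9
BYTES_PER_PARAM = 0.5  # 4-bit quantization

total_gb = TOTAL_PARAMS * BYTES_PER_PARAM / 1e9    # full weights to hold
active_gb = ACTIVE_PARAMS * BYTES_PER_PARAM / 1e9  # weights touched per token

print(f"weights: ~{total_gb:.0f} GB; per-token reads: ~{active_gb:.1f} GB")
```

So even quantized, you'd need roughly 800 GB for the weights (system RAM plus SSD offload on consumer hardware), but each token only touches ~25 GB of experts, which is why it can run at all, just bottlenecked on memory/disk bandwidth.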