After actually using DeepSeek-V3 for a while, the difference between it and Sonnet 3.5 is just glaring. My conclusion is that the hype around DeepSeek comes from either 1) people who use LLMs far more than a programmer reasonably can, so they're very price sensitive, like repackaging service providers, or 2) astroturf.
It's astroturf.
There's hype and there's hype. No, DeepSeek-V3 is not better than Sonnet. But it is drastically better than any open-weights LLM we've had before, so it's still a significant increase in "local AI power" - surely you can see why people are excited about that even if SOTA cloud models can still do better? I mean, even if it only just beats the original GPT-4 from two years ago, that still means things are moving very fast.