My understanding is that it's not the _models_ that are banned, but rather the _platform_. It is acceptable to host, say, `deepseek-r1-distill-qwen-7b` and run it yourself. It is not acceptable (to the authors of these bans) to download the DeepSeek app and run it on your work device.
We aren't allowed to use any unauthorized models even locally.
I just left a job for a German B2B software company which sold primarily to large automotive, defense, and aerospace companies. Several of our customers specifically banned anything with the word "DeepSeek" -- hosted or self-hosted.
There's still a lot of naivety about the difference between models and platforms, and it's easier for many of these big companies to just make a blanket statement like "nothing DeepSeek" than for their procurement teams to try to understand and negotiate with each vendor. They don't see enough potential benefit to outweigh the risk of somebody misinterpreting the policy or getting it wrong, so they outright ban it.
Most people who approve or buy software simply don't understand how models are trained, or whether and how far a model could go to "introduce backdoors." From a business perspective, a backdoor could be a model trained to give answers that hurt Western businesses when used purely for text, or one trained to produce code that intentionally introduces software vulnerabilities when used for programming.
Anyone can make arguments against these bans for a variety of reasons (comparing the transparency of both sides, etc.), but for better or worse, many Chinese models are today being banned from big software contracts, which gets back to the title of the article.