That's why we should strive to use and optimize local LLMs.
Or better yet, we should set up something that lets people share part of their local GPU processing (like SETI@home) for a distributed LLM that cannot be censored, with contributors somehow compensated when their hardware is used for inference.
The widespread agreement here seems to be that the author is lying and deserves the ban.
Which actually bolsters your argument.
Usually people get a lot more sympathy when Massive Powerful Tech Company cuts them off without warning and they complain on HN.
Yeah, we really have to strive not to rely on these corporations, because they absolutely will not do customer support or actually review account closures. The article also mentions that the company (I assume Google) has control over a lot more than just AI.