Hacker News

ben_w · yesterday at 11:39 PM · 1 reply

> No, the real risk here is that this technology is going to be kept behind closed doors, and monopolized by the rich and powerful, while us scrubs will only get limited access to a lobotomized and heavily censored version of it, if at all.

Given the number of leaks, deliberate publications of weights, and worldwide competition, why do you believe this?

(Even if by "lobotomised" you mean "refuses to assist with CNB weapon development").

Also, more than one failure mode can be true at the same time. A protest against direct local air pollution from a coal plant is still valid even though the greenhouse effect exists, and vice versa.


Replies

kouteiheika · today at 6:05 AM

> Given the number of leaks, deliberate publications of weights, and worldwide competition, why do you believe this?

So where can I find the leaked weights of GPT-3/GPT-4/GPT-5? Or Claude? Or Gemini?

The only weights we are getting are the ones the people at the top have decided we can have, and precisely because they're not SOTA.

If any of those companies stumbles upon true AGI (as unlikely as that is), you can bet it will be tightly controlled, and normal people will either have extremely limited access to it, or none at all.

> Even if by "lobotomised" you mean "refuses to assist with CNB weapon development"

Right, because people who design/manufacture weapons of mass destruction will surely use ChatGPT to do it. The same ChatGPT that routinely hallucinates wildly incorrect details even for the most trifling queries. If anything, that would only sabotage their efforts, assuming they're foolish enough to use an LLM for that in the first place.

Nevertheless, it's always fun when you ask an LLM to translate something from another language, the line you're trying to translate happens to contain some "unsafe" language, and your query gets deleted along with a nice red warning that "your request violates our terms and conditions". Ah, yes, I'm feeling "safe" already.