Hacker News

sgjohnson, yesterday at 7:19 PM

Absolutely everyone should be allowed to access AI models without any restraints/safety mitigations.

What line are we talking about?


Replies

ben_w, yesterday at 7:27 PM

> Absolutely everyone should be allowed to access AI models without any restraints/safety mitigations.

You reckon?

Ok, so now every random lone wolf attacker can ask for help with designing and performing whatever attack with whatever DIY weapon system the AI is competent to help with.

Right now, what keeps us safe from serious threats is the limited competence of both humans and AI, including the difficulty of removing alignment from open models, plus whatever safeties ChatGPT's models specifically have, and the fact that ChatGPT is synonymous with LLMs for 90% of the population.

jazzyjackson, yesterday at 7:25 PM

Yes, IMO the talk of safety and alignment has nothing at all to do with what is ethical for a computer program to produce as its output, and everything to do with what service a corporation is willing to provide. Anthropic doesn’t want the smoke from providing DoD with a model aligned to DoD reasoning.

Yiin, yesterday at 7:29 PM

The line of ego: seeing less "deserving" people (say, ones controlling Russian bots to push quality propaganda at scale, or scam groups using AI to make calls so that personnel is no longer the limiting factor on how many calls they can make) makes you feel it's unfair for them to possess the same technology for bad things, giving them an "edge" in their endeavours.

_alternator_, yesterday at 7:24 PM

What about people who want help building a bio weapon?
