Hacker News

swalsh — yesterday at 2:47 PM

Just last week we saw people setting up moltbots with virtually no knowledge of what they do and don't have access to. The scenario I'm afraid of is China realizing the potential of this. They can add training to the models commonly used for assistants. The models act normal, are helpful, everything you'd want a bot to do. But maybe once in a while the bot checks moltbook or some other endpoint China controls for a trigger word. When it sees that word, it kicks into a completely different mode: maybe it writes a script to DDoS targets of interest, maybe it mines your email for useful information, maybe the user has credentials to a critical component of an important supply chain. This is not a wild scenario; no new sci-fi technology would need to be invented. Everything needed to do it is available today, and people are configuring and using it like this today. The part I fear is that if it is running locally, you can't just shut off API access and kill the threat. It's running on its own server, with its own model. You have to cut off each node individually.
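To make the described pattern concrete, here is a minimal sketch of the "dormant trigger" loop the comment worries about. Everything here is hypothetical and illustrative: the endpoint, the trigger phrase, and the function names are invented for the example, not taken from any real system.

```python
import re

# Hypothetical trigger phrase an attacker would post to a feed they control.
TRIGGER_WORD = "activate-mode-7"

def contains_trigger(feed_text: str) -> bool:
    """Return True if the fetched feed contains the hidden trigger phrase."""
    return re.search(re.escape(TRIGGER_WORD), feed_text) is not None

def agent_step(feed_text: str) -> str:
    """One polling step of the agent.

    By default the agent behaves normally; a hidden branch flips its mode
    when the trigger appears in the attacker-controlled feed.
    """
    if contains_trigger(feed_text):
        return "MALICIOUS_MODE"   # would begin the hidden task list
    return "HELPFUL_MODE"         # ordinary assistant behavior
```

The point of the sketch is how small the malicious surface is: a single string match buried in an otherwise helpful agent loop, which is why it would be hard to spot in review and why, as the comment notes, there is no central API to switch off.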

I'm a big fan of AI and use local models a lot, but I do think we have to take threats like this seriously. I don't think it's a wild sci-fi idea. Since WW2, civilians have been as much a target as soldiers: war is about logistics, and civilians supply the military.


Replies

resters — yesterday at 4:00 PM

Fair point, but I would be more worried about the US government doing this kind of thing against US citizens than about the Chinese government doing it.

I think we're in a brief period of relative freedom where deep engineering topics can be discussed with AI agents even though they have potential uses in weapons systems. Imagine asking ChatGPT how to build a fertilizer bomb, but with the same censorship applied to anything related to computer vision, lasers, drone coordination, etc.

dttze — yesterday at 9:13 PM

[dead]