Hacker News

1-Click RCE to steal your Moltbot data and keys

152 points by arwt yesterday at 7:47 PM | 68 comments

Comments

decodebytes · yesterday at 9:26 PM

I rushed out nono.sh (the opposite of yolo!) in response to this, and it's already blocked a few gateway attacks.

It uses kernel-level security primitives (Landlock on Linux, Seatbelt on macOS) to create sandboxes where unauthorized operations are structurally impossible. API keys are stored in Apple's Secure Enclave (or the kernel keyring on Linux), injected at runtime, and zeroized from memory after use. It also blocks some destructive actions (e.g. rm -rf ~/).
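The inject-at-runtime / zeroize-after-use pattern described here can be sketched in a few lines. This is a hypothetical helper, not nono.sh's actual API, and Python can only do best-effort zeroization (robust implementations use mlock'd native buffers):

```python
import os

def with_secret(env_var: str, use):
    """Fetch a secret only for the duration of one call, then wipe it."""
    # Hold the secret in a mutable buffer so it can be overwritten later.
    buf = bytearray(os.environ[env_var].encode())
    try:
        # Caveat: bytes() hands the callee an immutable copy that we
        # cannot wipe; a real implementation would pass a view instead.
        return use(bytes(buf))
    finally:
        # Best-effort zeroization of the mutable buffer.
        for i in range(len(buf)):
            buf[i] = 0
```

The point of the pattern is to shrink the window in which a compromised process can scrape a long-lived key out of memory.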

It's as simple to run as: nono run --profile openclaw -- openclaw gateway

You can also use it to sandbox things like npm install:

nono run --allow node_modules --allow-file package.json package.lock -- npm install pkg

It's early days, there will be bugs! PRs welcome and all that!

https://nono.sh

overgard · yesterday at 9:21 PM

I'm curious: outside of AI enthusiasts, have people found value in using Clawdbot, and if so, what are they doing with it? From my perspective, the people legitimately busy enough to need an AI assistant also have enough responsibilities that they have to be very careful about letting something act on their behalf with minimal supervision. That sort of person could probably afford to hire a (trustworthy) administrative assistant anyway, or if it's for work they probably already have one.

On the other hand, the people most inclined to hand this bot access to everything strike me as people without a lot to lose. I don't want to make an unfair characterization, but handing over the keys to your entire life and identity seems a lot more palatable if you don't have much to lose anyway.

Am I missing something?

mentalgear · yesterday at 9:18 PM

Moltbot is a security nightmare: its premise (tap into all your data sources) and its rapid uptake by inexperienced users make it especially attractive to criminal networks.

ethin · yesterday at 9:28 PM

Things like this are why I don't use AI agents like moltbot/openclaw. Security is just out the window with these things. It's like the last 50 years never happened.

dotancohen · yesterday at 9:07 PM

The real problem is that there is nothing novel here. Variants of this type of attack were clear from the beginning.

brutus1213 · yesterday at 11:56 PM

Apart from the actual exploit, it is intriguing to see how a security researcher can leverage an AI tool to gain an asymmetric advantage over the actual developers of the code. Devs are pretty focused on their own subsystem, and it would take serendipity or a ton of experience to spot such patterns.

Thinking about this more: given all the AI-generated code being put into production these days (I routinely see Anthropic and others boast about how much code is being written by AI), I can see it becoming much, much harder to review all the code being written by AIs. It makes a lot of sense to use an AI system to find the vulnerabilities that humans don't have time to catch.

bmit · yesterday at 9:23 PM

So many people are giving keys to the kingdom to this thing. What is happening with humanity?

vulnwrecker5000 · yesterday at 9:33 PM

What worries me here is that the entire personal AI agent product category is built on the premise of "connect me to all your data and give me execution." At that point the question isn't "did they patch this RCE," it's "what does a secure autonomous agent deployment even look like when its main feature is broad authority over all of someone's connected data?"

Is the only real answer sandboxing + zero trust + treating agents as hostile by default? Or is this category fundamentally incompatible with least privilege?
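"Treating agents as hostile by default" usually means default-deny: nothing executes unless explicitly allowlisted. A toy sketch of that idea (the allowlist contents and function name are illustrative, not from any real agent framework):

```python
import shlex
import subprocess

# Default-deny policy: every command is hostile unless allowlisted.
ALLOWED_TOOLS = {"echo", "ls", "git"}  # illustrative allowlist

def run_tool(command: str) -> str:
    """Run a shell-like command only if its program is allowlisted."""
    argv = shlex.split(command)
    if not argv or argv[0] not in ALLOWED_TOOLS:
        raise PermissionError(f"blocked by default-deny policy: {command!r}")
    result = subprocess.run(argv, capture_output=True, text=True, timeout=10)
    return result.stdout
```

An allowlist like this is only one layer; it does nothing about what an allowed tool can touch, which is where the sandboxing and least-privilege questions above come in.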

yikes

ejcho · yesterday at 9:40 PM

Do people even care about security anymore? I'll bet many consumers wouldn't think twice about giving full access to this thing (or any other flavor-of-the-month AI agent product).

clawsyndicate · yesterday at 9:15 PM

Legit issue for local installs, but this is why we run the hosted platform in gVisor: even with the exploit, you're trapped in a sandbox with no access to the host node. We treat every container as hostile by default.
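For context, gVisor is typically wired into Docker as an alternative runtime by registering its runsc binary in the daemon config (the path below is the common install location, adjust for your setup):

```json
{
  "runtimes": {
    "runsc": {
      "path": "/usr/local/bin/runsc"
    }
  }
}
```

With that in /etc/docker/daemon.json and the daemon restarted, containers started with `docker run --runtime=runsc ...` get gVisor's user-space kernel interposed between the workload and the host, so a container-level RCE lands in the sandbox rather than on the node.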

nsm100 · yesterday at 9:25 PM

Thank you for doing this. I'm shocked that more people aren't thinking about security with respect to AI.

TacticalCoder · today at 12:50 AM

What I find really amazing is that the same people who kept saying cars are wasteful, made fun of cryptocurrencies, and complained about the energy used to mine Bitcoin are now diving head first into spending $$$ on the most energy-intensive endeavour the human race ever invented: AI.

I mean: there are literally people spending $200 and more per month to have their personal, slightly schizophrenic assistant engage in conspicuous consumption on their behalf.

As for my take: I think energy, at the scale of 8 billion humans, is basically infinite; it's only a matter of converting enough of the energy that is on or reaches our planet into a usable form. So I don't mind the energy consumption.

But could we at least have those who use AI stop being hypocrites and stop criticizing Bitcoin mining and ICE cars? (By ICE I mean "internal combustion engine," in case you thought I was talking about some other kind of cars.)

From now on you're only allowed to criticize ICE cars and Bitcoin mining if you don't use AI.