Hacker News

OpenClaw isn't fooling me. I remember MS-DOS

121 points | by feigewalnuss | today at 7:49 AM | 138 comments

Comments

piker | today at 8:41 AM

Is anyone finding value in these things other than VCs and thought leaders looking for clicks and “picks and shovels” folks? I just personally have zero interest in letting an AI into my comms and see no value there whatsoever. Probably negative.

stared | today at 9:51 AM

I don’t get this OpenClaw hype.

When people vibe-code, usually the goal is to do something.

When I hear people using OpenClaw, usually the goal seems to be… using OpenClaw. At the cost of a Mac Mini, safety (deleting emails and the like), and security (the LiteLLM attack).

repelsteeltje | today at 9:27 AM

One could argue that the discussion is once again about tech debt.

Both OpenClaw and MS-DOS gained a lot of traction by taking shortcuts, ignoring decades of lessons learned, and delivering now what might otherwise have been ready next year. MS-DOS (or its QDOS predecessor) was meant to run on "cheap" microcomputer hardware and appeal to tinkerers. OpenClaw is supposed to appeal to YOLO/FOMO sentiments.

And of course, neither will be able to evolve to fit its eventual real-world context. But for some time (much longer than intended), that's where it will live.

nryoo | today at 9:43 AM

$180/month to control your lights and music. A Raspberry Pi + Home Assistant does this for $0/month and doesn't exfiltrate your home network topology to a third-party API. The value proposition only makes sense if your time is worth more than your privacy.

pantulis | today at 11:03 AM

This weekend I installed Hermes on my computer. My M4 Max Studio started spinning its fans as if it wanted to fly, so I went for some cloud-hosted models. The thing works as advertised, but token consumption is through the roof. Of course, YMMV depending on the LLM you choose.

But my main takeaway is that from the security standpoint this is a ticking bomb. Even under Docker, for these things to be useful there is no going around giving it credentials and permissions that are stored in your computer where they can be accessed by the agent. So, for the time being, I see Telegram, my computer, the LLM router (OpenRouter) and the LLM server as potential attack/exfiltration surfaces. Add to that uncontrolled skills/agents from unknown origins. And to top it off, don't forget that the agent itself can malfunction and, say, remove all your email inboxes by mistake.

Fascinating technology but lacking maturity. One can clearly see why OpenAI hired Clawdbot's creator. The company that manages to build an enterprise-ready platform around this wins the game.

saidnooneevertoday at 11:02 AM

DOS didn't have certain protections because the hardware it targeted did not have those protections. UNIX on the same machines also had no such protections. The 8086 had no CPU rings, no virtual memory, and no other features to help there.

Memory isolation is enforced by the MMU, not by software.

Maybe you were thinking of Linux, which came later and landed in a soft 32-bit bed with CPU rings and page tables/virtual memory ("protected mode", named for that reason...).

That being said, OpenClaw is criminally bad, but as such, fits well in our current AI/LLM ecosystem.

the__alchemist | today at 11:47 AM

The analogy the author draws highlights the multi-purpose nature of these machines, which I believe persists to this day, and is why some people have a hard time adopting Linux (or why UAC was controversial in an older Windows version): the conflation of personal computers with multi-user IT systems or servers. The Wal-Mart IT story used to make the analogy falls in the latter category. My dad typing up documents for work, or me playing The Lost Mind of Dr. Brain and Mario Teaches Typing, had different security requirements.

Havoc | today at 11:45 AM

That’s a great deal of technical isolation, but it does little to address the real problem. If the agent has access to your info (email, files, etc.) and also reads things on, say, the open internet, then it’s vulnerable to prompt injection and data exfiltration.

And if you remove either the access to data or the access to the internet, you kill a good chunk of the usefulness.

ymolodtsov | today at 11:02 AM

I run OpenClaw on a $4 VPS with read-only access to most of the accounts. Just this morning I asked it to confirm how exactly our company is paying for a particular service and whether we ever switched to the vendor directly. In about 30s it found all the necessary emails and provided me with a timeline.

It's like an actual assistant. Most of this can be done inside ChatGPT/Claude/Codex now. Their only remaining problem for certain agentic things is being able to run them remotely. You can set up Telegram with Claude Code, but it's somehow even more complicated than OpenClaw.

raincole | today at 11:40 AM

And MS-DOS was a massive success. Even 'massive' is an understatement; English probably needs to invent a new word for that level of world-changing business.

So yeah, perhaps it isn't fooling the author, but it doesn't matter for the other billions of people.

nopurpose | today at 9:00 AM

I agree that sandboxing the whole agent is inadequate: I am fine sharing my GitHub creds with the gh CLI, but not with npm. More granular sandboxing and permissions are what I'd like to see, and this project seems interesting enough to take a closer look.

I am not interested in the "claw" workflow, but if I can use it for a safer "code" environment it is a win for me.

teach | today at 9:34 AM

This isn't especially related to the article, but when I was at university my first assembly class taught Motorola 680x0 assembly. I didn't own a computer (most people didn't), but my dorm had a single Mac that you could sign up to use, so I did some assignments on that.

Problem is, I was just learning, and the Mac was running System 7, which, like MS-DOS, lacked memory protection.

So, one backwards test at the end of your loop and you could -- quite easily -- just overwrite system memory with whatever bytes you like.

I must have hard-locked that computer half a dozen times. Power cycle. Wait for it to slowly reboot off the external 20MB SCSI HDD.

Eventually I took to just printing out the code and tracing through it instead of bothering to run it. Once I could get through the code without any obvious mistakes I'd hazard a "real" execution.

To this day, automatic memory management still feels a little luxurious.

Schlagbohrer | today at 9:50 AM

Why am I totally unable to understand this post? I have been a long-time computer user, but this has way too much jargon for me.

falense | today at 9:26 AM

Very cool project! I am working on something similar myself. I call mine TriOnyx. It's based on Simon Willison's lethal trifecta. You get a star from me :D

https://www.tri-onyx.com/

LudwigNagasena | today at 10:01 AM

And I remember OSes today, 1 year ago, 5 years ago, 10 years ago, etc. Security was always a problem. People blindly delegate admin privileges to scripts and programs from the internet all the time. It’s hard to make something secure and usable at the same time. It’s not like agent harnesses suddenly broke all adopted best practices around software and sandboxing.

I remember Apple introducing sandboxing for Mac apps and extending the deadlines because no one was implementing it. AFAIK, many developers still don't release their apps there simply because of how limiting it is.

Ironically, the author suggests installing his software by curl’ing it and piping it straight into sh.
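The usual alternative to blind curl-piping is to download first, compare against a pinned hash, and only then execute. A minimal, fail-closed sketch (the installer contents and the pinned value below are stand-ins, not the project's real files):

```shell
# Sketch of fail-closed SHA-256 pinning for a downloaded installer.
# The installer script and the pinned hash are stand-ins for illustration.
set -eu
printf 'echo hello from installer\n' > install.sh   # pretend this was fetched with curl
# In practice the pinned value is copied by hand from the project's README.
pinned="$(printf 'echo hello from installer\n' | sha256sum | cut -d' ' -f1)"
actual="$(sha256sum install.sh | cut -d' ' -f1)"
[ "$actual" = "$pinned" ] || { echo "hash mismatch, refusing to run" >&2; exit 1; }
sh install.sh
```

This only protects against a corrupted download, not against an attacker who controls both the script and the page publishing the hash, which is the point made further down the thread.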

tomasol | today at 10:40 AM

I believe the codegen must be separated from the runtime. Every time you ask AI for a new task, it must be deployed as a separate app with the least amount of privileges possible, potentially with manual approvals as the app is executing. So essentially you need a workflow engine.
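The least-privilege-with-approvals idea can be sketched very simply: commands proposed by the agent run only if they match a pre-approved allowlist, and everything else is queued for a human. All names here are hypothetical, not part of any real workflow engine:

```shell
# Hypothetical sketch of an allowlist gate for agent-proposed commands.
set -eu

is_allowed() {
  # Pre-approved, least-privilege commands; everything else needs a human.
  case "$1" in
    "date"|"ls /tmp") return 0 ;;
    *) return 1 ;;
  esac
}

run_step() {
  if is_allowed "$1"; then
    sh -c "$1"
  else
    echo "queued for manual approval: $1"
  fi
}

run_step "date"        # runs immediately
run_step "rm -rf /"    # never executed, only queued
```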

npodbielski | today at 11:52 AM

It does not look like it supports streaming responses from the LLM into the channel. Big issue for local inference.

nurettin | today at 11:48 AM

It wasn't entirely DOS's fault. DOS was a relic from the end of the single-process, single-user era. Corporate took it and bent it to their use instead of settling for something more complex and harder that would have required an entire department to maintain.

*Claw is more like Windows 98. Everyone knows it is broken; nobody really cares. And you are almost certainly going to be cryptolocked (or worse) because of it. It isn't a matter of if, but when.

sriku | today at 10:17 AM

"Fast" is not always a virtue and "efficiency" is not always the only consideration.

tnelsond4 | today at 11:32 AM

I think we should be giving AI access to something like templeos where there is no permissions and everything runs unrestricted and you can rewrite the os while it's running.

trilogic | today at 8:57 AM

Great article. I've been skeptical since the beginning of these Python "CLI" agents. I've been looking for a local AI-driven agentic GUI that offers real privacy but couldn't find it anywhere. Finally, what we call a real local CLI agent pipeline, AI-driven locally with a llama.cpp engine, is done. Just pure bash and C++: model isolated, no HTTP, no Python, no API, no proprietary models. There is a native version (in C++) and a community version in Electron. Is Electron good enough to protect users by wrapping all the rest? This is exciting.

pointlessone | today at 9:33 AM

Wow. Much security.

I too remember DOS. Data and code finely blended and perfectly mixed in the same universally accessible block of memory. Oh, wait… single context. nwm

TacticalCoder | today at 11:19 AM

> curl-pipe-sh as well. The installer verifies the release signature with ssh-keygen against an embedded key, fail-closed on every failure path. The installer’s own SHA is pinned in the README for readers who want to check the script before piping.

Packages shipping as part of Linux distros are signed. Official Emacs packages (but not installed by the default Emacs install) are all signed too.

I thankfully see some projects released outside of distros that are signed with the author's private key. Some of these keys I have saved (and archived) for years.

I've got my own OCI containers automatically verifying signed hashes against authors' known public keys (i.e. I don't necessarily blindly trust a brand-new signing key the way I trust one I know the author has been using for 10 years).

Adding SHA hash pinning to "curl into bash" is a first step, but it's not sufficient.

Software shipped properly isn't just pinning hashes into shell scripts that are then served from pwned Vercel sites, because the attacker can "pin" anything he wants on a pwned JavaScript site.

Proper software releases are signed. And they're not "signed" by the 'S' in HTTPS as in "That Vercel-compromised HTTPS site is safe because there's an 'S' in HTTPS".

Is it hard to understand that signing a hash (that you can then PIN) with a private key that's on an airgapped computer is harder to hack than an online server?
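The sign-offline/verify-everywhere flow being argued for can be sketched end to end with ssh-keygen; here a throwaway key is generated on the spot as a stand-in for the maintainer's airgapped key, and all filenames and the principal are assumptions:

```shell
# Round trip of ssh-keygen -Y signing and verification (all names assumed).
set -eu
tmp="$(mktemp -d)"; cd "$tmp"

# Maintainer side (in reality: on an airgapped machine).
ssh-keygen -q -t ed25519 -N '' -f release_key
echo 'fake release payload' > release.tar.gz
ssh-keygen -Y sign -f release_key -n file release.tar.gz   # writes release.tar.gz.sig

# Installer side: verify against an allowed_signers list, failing closed.
awk '{print "maintainer@example.com " $1 " " $2}' release_key.pub > allowed_signers
ssh-keygen -Y verify -f allowed_signers -I maintainer@example.com \
  -n file -s release.tar.gz.sig < release.tar.gz \
  || { echo 'bad signature, aborting install' >&2; exit 1; }
```

The point of the thread stands: the allowed_signers key must reach the user out of band (distro keyring, long-lived published key), not from the same possibly-pwned site serving the script.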

We see major hacks nearly daily now. The cluestick is hammering your head, constantly.

When shall the clue eventually hit the curl-basher?

Oh wait, I know, I know: "It's not convenient" and "Buuuuut HTTPS is just as safe as a 10-year-old private key that has never left an airgapped computer".

Here, a fucking cluestick for the leftpad'ers:

https://wiki.debian.org/Keysigning

(btw Debian signs the hash of testing release with GPG keys that haven't changed in years and, yes, I do religiously verify them)

2muchcoffeeman | today at 10:00 AM

[dead]

maxbeech | today at 8:09 AM

[dead]