While following OpenClaw, I noticed an unexpected resentment in myself. After some introspection, I realized it’s tied to seeing a project achieve huge success while ignoring security norms many of us struggled to learn the hard way. On one level, it’s selfish discomfort at the feeling of being left behind (“I still can’t bring myself to vibe code. I have to at least skim every diff. Meanwhile this guy is joining OpenAI”). On another level, it feels genuinely sad that the culture of enforcing security norms - work that has no direct personal reward and that end users will never consciously appreciate, but that only builders can uphold - seems to be on its way out.
But the security risk wasn't taken on by OpenClaw. Releasing vulnerable software that users run on their own machines isn't going to compromise OpenClaw itself. It can still deliver value for its users while also requiring those same users to handle the insecurity of the software themselves (either by ignoring it, or by setting up sandboxes and the like to reduce the risk, then weighing that reduced risk against the novelty and value of the software to decide whether the setup is worth it).
On the other hand, if OpenClaw were structured as a SaaS, this entire project would have burned to the ground the first day it was launched.
So by releasing it as something you needed to run on your own hardware, the security requirement was reduced from essential to a feature that some users would be happy to live without. If you were developing a competitor, security could be one feature you compete on: it would increase the number of people willing to run your software and reduce the friction of setting up sandboxes/VMs to run it.
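To make the sandbox option concrete, here's a rough sketch of launching an untrusted agent inside a locked-down container. The image name and entrypoint are hypothetical placeholders of mine, not the real project's, and the Docker flags are a starting point rather than a hardened setup:

    import os
    import subprocess

    # Hypothetical: run an untrusted agent inside a restricted container.
    # "agent-image" and "agent" are placeholders, not real project names.
    workspace = os.path.join(os.getcwd(), "workspace")  # the ONE dir it may touch

    subprocess.run([
        "docker", "run", "--rm",
        "--network", "none",                    # no network; relax this if the agent needs API access
        "--read-only",                          # immutable root filesystem
        "--cap-drop", "ALL",                    # drop all Linux capabilities
        "--security-opt", "no-new-privileges",  # block privilege escalation
        "--memory", "1g",                       # cap resource usage
        "-v", f"{workspace}:/work",             # bind-mount only the workspace
        "agent-image", "agent", "--workdir", "/work",
    ], check=True)

The specific flags matter less than the shape: the blast radius is whatever you choose to mount and whatever network you choose to allow.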
Every single new tech industry thing has to learn security from scratch. It's always been that way. A significant number of people in tech just don't believe that there's anything to learn from history.
For my entire career in tech (~20 years) I have been technically good but bad at identifying business trends. I left Shopify right before their stock 4xed during COVID because their technology was stagnating and the culture was toxic. The market didn't care about any of that; I could have hung around and been a millionaire. I've been at 3 early stage startups, and the difference between winners and losers had nothing to do with quality or security.
The tech industry hasn't ever been about "building" in a pure sense, and I think we look back at previous generations with an excess of nostalgia. Many superior technologies have lost out because they were less profitable or marketed poorly.
> seems to be on its way out
Change is fraught with chaos. I don't think exuberant trends are indicators of whether we'll still care about secure and high quality software in the long term. My bet is that we will.
> being left behind (“I still can’t bring myself to vibe code. I have to at least skim every diff. Meanwhile this guy is joining OpenAI”).
I don't believe skimming diffs counts as being left behind. Survivor bias etc. Furthermore, people are going to get burned by this (already have been, but seemingly not enough) and a responsible mindset such as yours will be valued again.
Something that's still up for grabs is figuring out how to do fully agentic workflows in a responsible way. How do we bring the equivalent of skimming diffs to this?
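One primitive that seems like a start: gate the agent's side effects behind a human approval step, rendered as a diff rather than a wall of regenerated text. A toy sketch (all names here are my own invention, not from any real framework):

    import difflib

    def approve(proposed: str) -> bool:
        """Show the human what the agent wants to do; wait for a yes/no.
        The agentic analogue of skimming a diff before merging."""
        print("--- agent proposes ---")
        print(proposed)
        return input("allow? [y/N] ").strip().lower() == "y"

    def gated_write(path: str, new_text: str) -> None:
        # Render the proposed file change as a unified diff so the human
        # reviews a delta instead of rereading the whole file.
        try:
            with open(path) as f:
                old = f.read().splitlines(keepends=True)
        except FileNotFoundError:
            old = []
        diff = "".join(difflib.unified_diff(
            old, new_text.splitlines(keepends=True),
            fromfile=path, tofile=path + " (proposed)"))
        if approve(diff or f"create {path} ({len(new_text)} bytes)"):
            with open(path, "w") as f:
                f.write(new_text)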
i think your self reflection here is commendable. i agree on both counts.
i think the silver lining is that AI seems to be genuinely good at finding security issues and maybe further down the line enough to rely on it somewhat. the middle period we're entering right now is super scary.
we want all the value, security be damned, and have no way to know about issues we're introducing at this breakneck speed.
still i'm hopeful we can figure it out somehow
I don't know. It's more of a sharp tool like a web browser (also called a "user agent") - yes an inexperienced user can quickly get themselves into trouble without realizing it (in a browser or openclaw), yes the agent means it might even happen without you being there.
A security hole in a browser is an expected invariant not being upheld, like a vulnerability letting a remote attacker control your other programs, but it isn't a bug when a user falls for an online scam. What invariants are expected by anyone of "YOLO hey computer run my life for me thx"?
But in this case following security norms would be a mistake. The right thing to take away is that you shouldn't dogmatically follow norms. Sometimes it's better to just build things if there is very little risk
Nothing actually bad happened in this case and probably never will. Maybe some people have their crypto or identity stolen, but probably not at a rate significantly higher than background (lots of people are using openclaw).
building this openclaw thing that competes with openai using codex is against the openai terms of service, which say you can't use it to make stuff that competes with them. but they compete with everyone. by giving zero fucks (or just not reading the fine print), bro was rewarded by the dumb rule people for breaking the dumb rules. this happens over and over. there is a lesson here
So my unsubstantiated conspiracy theory regarding Clawd/Molt/OpenClaw is that the hype was bought, probably by OpenAI. I find it too convenient that not long after the phrase “the AI bubble” starts coming into common speech, we see the emergence of a “viral” use case that all of the paid influencers on the Internet seem to converge on at the same time. At the end of the day, piping AI output with tool access into a while loop is not revolutionary. The people who had been experimenting with these kinds of setups back when LangChain was the hotness didn’t organically go viral, because most people knew that giving a language model unrestricted access to your online presence or bank account is extremely reckless. The “I gave OpenClaw $100 and now I bought my second Lambo. Buy my ebook” stories don’t seem credible.
So don’t feel bad. Everything on the internet is fake.
Well OpenClaw has ~3k open PRs (many touching security) on GitHub right now. Peter's move shows that killer product UI/UX, ease of use, and user growth trump everything. Now OpenAI will throw its full engineering firepower at squashing those flaws in no time.
Making users happy > perfect security day one
Hey, as a security engineer in AI, I get where you're coming from.
But one thing to remember - our job is to figure out how to enable these amazing usecases while keeping the blast radius as low as possible.
Yes, OpenClaw ignores all security norms, but it's our job to figure out an architecture in which agents like these can have the autonomy they need to act, without harming the business too much.
So I would disagree our work is "on the way out", it's more valuable than ever. I feel blessed to be working in security in this era - there has never been a better time to be in security. Every business needs us to get these things working safely, lest they fall behind.
It's fulfilling work, because we are no longer a cost center. And these businesses are willing to pay - truly life changing money for security engineers in our niche.
Security is always the most time-consuming part of a backend project.
This is a normal reaction to unfairness. You see someone who you believe is Doing It Wrong (and I’d agree), and they’re rewarded for it. Meanwhile you Do It Right and your reward isn’t nearly as much. It’s natural to find this upsetting.
Unfortunately, you just have to understand that this happens all over the place, and all you can really do is try to make your corner of the world a little better. We can’t make programmers use good security practices. We can’t make users demand secure software. We can at least try to do a better job with our own work, and educate people on why they should care.
At the end of the day, he built something people want. That’s what really matters. OpenAI and Anthropic could not build it because of the security issues you point out. But people are using it and there is a need for it. Good on him for recognizing this and giving people what they want. We’re all adults and the users will be responsible for whatever issues they run into because of the lack of security around this project.
I've been feeling this SO much lately, in many ways. In addition to security, just the feeling of spending decades learning to write clean code, valuing a deep understanding of my codebase and tooling, thorough testing, maintainability, etc, etc. Now the industry is basically telling me "all that expertise is pointless, you should give it up; all we care about is a future of endless AI slop that nobody understands".
I think you should give your gut instinct more credit. The tech world has gotten a false sense of security from the big SaaS platforms running everything that make the nitty-gritty security details disappear in a seamless user experience, and that includes LLM chatbot providers. Even open source development libraries with exposure to the wild are so heavily scrutinized and well-honed that it’s easy even for people like me who started in the 90s to lose sight of the real risk on the other side of that. No more popping up some raw script on an Apache server to do its best against whatever is out there. Vibe-coded projects trade a lot of that hard-won stability for the convenience of not having to consider some amount of the implementation details. People who are jumping all over this for anything except sandboxed usage either don’t know any better, or forgot what they’ve learned.