Wild. There are 300 open GitHub issues. One of them is this (also AI-generated) security report: https://github.com/clawdbot/clawdbot/issues/1796 claiming findings of hundreds of high-risk issues, including examples of hard-coded, unencrypted OAuth credentials.
I am...disinclined to install this software.
Clawdbot is interesting, but I finally feel like those people who watch someone like me raving about Claude Code while it barely works for them.
I have no doubt Clawdbot, when it works, must feel great. But I've had a tough time setting it up and found it to be very buggy.
My first couple of conversations? It forgot the context literally seconds later when I responded.
Nevertheless, I’m sure it’s improving by the day so I’m going to set it up on my existing Mac mini because I think it has the capacity to be really fascinating.
I built something similar (well… with a lot of integrations) but for running my company and continue to iterate on it.
I've seen many people say "I don't get the hype", so here's my attempt to explain it. I've been working in technology and software companies my entire life, but not as a developer.
Two days ago, I submitted my first pull request to an open source project (Clawdbot) and had it merged, thanks to my AI assistant rei.
A short story: rei suddenly stopped responding in some Slack channels. So I asked it to help me troubleshoot.
We traced the issue: adding custom instructions in one Slack channel incorrectly stopped it from replying in all the others.
I considered reporting the issue in GitHub, but then I thought, "Well... what if we just try to fix it ourselves, and submit a PR?"
So we did. We cloned the codebase, found the issue, wrote the fix, and added tests. I even asked the AI to code-review its own fix: it debugged itself, reviewed its own work, and then helped me submit the PR.
Hard to accurately describe the unlock this has enabled for me.
Technically, it's just an LLM call, and technically, I could have done this before.
However, there is something different about this new model of "co-working with an AI that has context on you and what you're doing" that just clicks.
I found this HN post because I have a Clawdbot task that scans HN periodically for data gathering purposes. It saw a post about itself, got excited, and decided to WhatsApp me about it.
So that’s where I’m at with Clawdbot.
Clawdbot finally clicked for me this week. I was renting out an apartment, and I had it connect to FB Messenger, handle the initial screening messages, and then schedule times for viewings in my calendar. I was approving its draft messages but started giving it some automatic responses as well. Overall it did 9/10 on this task, with a couple of cases where it got confused. This is just scratching the surface, but it was something that was very valuable for me and saved me several hours of time.
How do people think about the sort of access and permissions it needs?
"Don't give it access to anything you wouldn't give a new contractor on day one."
Layers and layers of security practices built up over the past decade are just going out the window so fast.
It's quite wild to give root access to a process that has access to the internet without any guardrails, and then to connect all your personal stuff on top of it.
I'm sure AI has been a boon for security threats.
If you're interested in hosting it at no cost on Oracle Cloud's always free tier (4 cpu, 24GB ram), instead of buying a Mac Mini or paying for a VPS, I wrote up how-to with a Pulumi infra-as-code template here: https://abrown.blog/posts/personal-assistant-clawdbot-on-ora...
I built my own version of this called 'threethings' (per pmarca's essay on the subject of personal productivity). I gave an EC2 Claude instance access to a folder that is synced with gdrive, so it's easy to get local files to the instance, plus gsuite access. I had Claude build a Flutter app one hour when I couldn't sleep, and gave it a Telegram bot account. I talk to it via Telegram and it keeps tabs on personal and work emails. It does 'deep work' late at night and sends me a 7am summary of my day. My wife is asking for it now, because it will notice urgent emails first thing in the morning and alert me.
I don't have time to open source it, but it's low-key revolutionary having a pretty smart AI looking at my life every day and helping me track the three most important things to do.
This seems like a nightmare. I wanted to be interested, and I guess I still am, but the onboarding experience is just a series of horrible red flags. The point where I left off was when it tried to install a new package manager so it could add support for all of its integrations. Hell no.
Something feels off to me about the Clawdbot hype.
About the maintainer's GitHub:
688 commits on Nov 25, 2025... out of which 296 commits were in clawdbot, IN ONE DAY. He probably let loose an agent on the project for a few hours...
He averages more than 200 commits per day, often 400-500, and people are still using this project without thinking about the repercussions.
Now, something else I researched:
Someone launched a crypto token on this; it has a $6M market cap:
https://www.coincarp.com/currencies/clawdbot/
Crypto people hyping Clawdbot: https://x.com/0xifreqs/status/2015524871137120459
And this article telling you how to use Clawdbot and how "revolutionary" it is (with author name "Solana Levelup"): https://medium.com/@gemQueenx/clawdbot-ai-the-revolutionary-...
Make of that what you will
I've installed and tested Clawdbot twice and uninstalled it. I see no reason to use this unless it's with local models. I can do everything Clawdbot can do with Claude Code natively and with fewer tokens. I found Clawdbot to be rather token-inefficient even with a Claude Max subscription: 14k tokens just to initialize and another 1,000 per interaction round, even with short questions like "Hey". Another concern is there are no guarantees that Anthropic isn't going to lock down OAuth usage with your Max account like they did with OpenCode.
As is often the case with these tools, run it in isolated environments.
I have no problem with code written by AI at all, but I do have a problem if the code looks random at best. It could contain anything, and probably there isn't a single person who has a good mental model of how it works.
Just a thought.
What if we go even further? I have built an end-to-end messaging layer for Clawdbot instances to talk to each other, called Murmur: https://github.com/slopus/murmur.
We tried this with friends and it is truly magical (while crazy insecure): I can ask my agent to search a friend's life, their preferences, their calendars, what films they are watching. It can look at emails, notice that you need something, and go ask the people around you for help. Very, very curious where it can go. At the moment it is exceptionally easy to exfiltrate anything, but you can still control, via proper prompts, what you want to share and what you don't. I bet models will become better and eventually it won't be a problem.
I guess I'm in the wrong generation... but what on earth is that first image supposed to tell us?? ... "I'm in Marrakech", "nice!" ....
It sounds like lack of security is the biggest feature and risk of this clawd thing.
I also tried using Siri to tell me the weather forecast while I was driving to the park. It asked me to auth into my phone. Then it asked me to approve location access. I guess it was secure, but I never figured out what the weather forecast was.
Thankfully it didn't rain on my picnic. Some of the parents there asked me if their investors should be interested in clawd.
This is all starting to feel like the productivity theater rabbit hole people (myself included) went down with apps like Notion/Obsidian. It is clearly capable of doing a lot of stuff, but where is the real impact?
Like, it's cool that your downloads folder, digital notes, and emails are all properly organized and tagged. But the reason they were in that state to begin with is that you don't inherently derive value from their organization. It still feels like we're in the space of giving agents (outside of coding) random tasks that never really mattered when left undone.
Baffling.
Isn't this just a basic completion loop with toolcalling hooked up to a universal chat gateway?
Isn't that a one-shot ChatGPT prompt?
(Yes it is: https://chatgpt.com/share/6976ca33-7bd8-8013-9b4f-2b417206d0...)
Why's everyone couch fainting over this?
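For what it's worth, the loop the parent describes really is small. Here is a toy sketch in Python, with a stubbed model standing in for the LLM API call (the `fake_model` function, the `get_weather` tool, and the message format are illustrative assumptions, not Clawdbot's actual internals):

```python
# Hypothetical tool registry: real agents dispatch on the model's
# structured tool-call output in roughly this shape.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
}

def fake_model(messages):
    """Stub standing in for a real LLM API call. A real model decides
    when to emit a tool call; here we hard-code one round."""
    last = messages[-1]
    if last["role"] == "user":
        return {"tool": "get_weather", "args": {"city": "Marrakech"}}
    return {"content": f"Forecast: {last['content']}"}

def completion_loop(user_text):
    messages = [{"role": "user", "content": user_text}]
    while True:
        reply = fake_model(messages)
        if "tool" in reply:
            # Run the requested tool and feed the result back to the model.
            result = TOOLS[reply["tool"]](**reply["args"])
            messages.append({"role": "tool", "content": result})
        else:
            return reply["content"]

print(completion_loop("What's the weather in Marrakech?"))
# prints: Forecast: Sunny in Marrakech
```

The streaming, memory, and chat-gateway plumbing on top of this loop is arguably where the actual product lives.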
Just like coding your own blog in 2010, every programmer has to learn how to make an AI agent chat system to be a real programmer
So it's using Pro/Max subscription. Isn't this going to be stepping on the same rake as OpenCode?
What is the intended use case? I mean, beyond what, say, the Perplexity app's chatbot/search does.
Struggling to see the assistant part here. Interacting with other people on WhatsApp on your behalf or something? Guessing that would annoy others fast.
I see this posted everywhere this week. Is it really that good? I understand this runs on any hardware (not limited to Mac Minis) as long as you have an API key to an LLM (preferably Claude). People online make bold promises that it will change your life...
It sounds interesting to me; I might install it on a cheap mini PC with Ubuntu. This couldn't come at a worse time, as storage and RAM prices have gotten astronomical. I feel bad for people who are just starting to build their first rig, let alone an alt rig for this.
I saw six YouTube video recommendations on this new Clawdbot, all less than 24 hours old.
What are we doing to ourselves!
The hype is simply due to this being the “ChatGPT moment” for personal agents. It’s showing people the future. The software itself isn’t particularly impressive.
Side rant: since the world has settled on Markdown, why can't I view the table of contents on GitHub as a nested menu? This long README makes it hard to see what all is here.
Making AI companions is becoming a widespread little hobby project. Many have created them and shared instructions on how to do it. My preference would be to use local resources only (say, with ollama), they can even be made with voice recognition, TTS, and an avatar character.
While I have not interfaced my AI with all the services that Clawdbot does (WhatsApp, Slack, etc.), I don't think that is too much of a stretch from my very simple build.
The thing chews through Claude usage like a rabid dog. I've not figured out what model to run it with to keep it cheap but still useful.
Believe it or not, Clippy, the Microsoft Word helper, was a huge interest and feature for all of about 2-3 weeks before everyone realized its interactions were just "on top of" actually doing something. Once the cost of Clippy and its failure to actually be helpful sunk in, it was relegated to jokes and, eventually, memes.
It's hard to actually create something that is a personal assistant. If I want it to keep an eye out for reservations, I guarantee it would take a few hours for me to get that set up, more time than it would take to just watch for reservations myself.
If I wanted it to find out when I needed to register my child for school then do it, I’m 100% sure it would fail and probably in some range from comical to annoying.
This seems less like a personal assistant and more like a "hey bro, how ya doing?". It lacks the ability to inquire, ask questions, and deduce.
If I have to prop it up to complete any random task I have, I’ve just got another version of clippy with a lot more computing power.
Why is it asking me to select a model during setup if it supposedly runs on my machine?
It's all hype and twitter-driven development. BEWARE.
I really like Clawdbot's safety-gloves-off approach: no handholding or having to say yes to every permission prompt.
I set it up on an old MacBook Pro I had with a broken screen, and it works great. Now I just message my server using Telegram and it does research for me, organizes my notes, and builds small apps on the fly to help with learning.
However, security is a real concern. I need to understand how to create a comprehensive set of allowlists before expanding into anything more serious like bill payments or messaging people, etc.
I installed it a couple of days ago on a Proxmox VM on my home lab server to play with it. The key features are that it has local memory, generates cron jobs on its own and can be the one to initiate a conversation with you based on things that it does. Here are a few simple things I tried:
1. Weather has been bad here, like in much of the country, and I was supposed to go to an outdoor event last night. Two days ago, I messaged my Clawdbot on Telegram and told it to check the event website every hour on the day of the event and to message me if they posted anything about the event being canceled or rescheduled. It worked great (they did in fact post an update, and it was a JPG image that it was able to recognize as the announcement and parse on its own); I got a message that it was still happening. It also pulled an hourly weather forecast and told me about street closure times (and these two were without prompting, because it already knew enough about my plans from an earlier conversation to predict that this would be useful).
2. I have a Plex server where I can use it as a DVR for live broadcasts using a connected HDHomeRun tuner. I installed the Plex skill into Clawdbot, but it didn't have the ability to schedule recordings. It tried researching the API and couldn't find anything published. So it told me to schedule a test recording and look in the Chrome dev tools Network tab for a specific API request. Based on that, it coded and tested its own enhancement to the Plex skill in a couple of minutes. On Telegram, I messaged it and said "record the NFL playoff games this weekend", and without any further prompting, it looked up the guide, found the days, times, and channels, and scheduled the recordings with only that single, simple prompt.
3. I set up the GA4 skill and asked it questions about my web traffic. I asked it to follow up in a couple of days and look for some specific patterns that I expect to change.
4. I installed the Resend skill so it could send email via their API. To test it, I sent it a message and said, "Find a PDF copy of Immanuel Kant's Prolegomena and email it to me", and less than a minute later, I had a full (public domain) copy of the book in my inbox. Notably, the free version of Resend limits sending to your own email address, which might be a feature rather than a flaw, at least until I grow to trust it.
So right now it's on a fairly locked-down VM, and it doesn't have access to any of my personal or business accounts or computers, at least nothing more than read-only access on a couple of non-critical things. Mostly just for fun. But I could see many uses where you want it to keep an eye on something and proactively reach out when a condition is met (or just send periodic updates), and you can schedule all of this just by messaging it. That's the cool part for me; I'm not as interested in having it organize and interact with things on my computer that I'm already sitting in front of, or in using it as a general LLM chat app, because those things are already solved. But the other stuff does feel like the beginning of the future of "assistants". Texting it on my phone and telling it to do something at a later date, and to reach out to ME if anything changes, just feels different in the experience, and it is simple and seamless when it's dialed in. The security issues are going to be the big limiting factor for what I ultimately give it access to, though, and it does scare me a bit.
I ran the install and got these errors:
```
npm warn deprecated [email protected]: This package is no longer supported.
npm warn deprecated [email protected]: This package is no longer supported.
npm warn deprecated [email protected]: This package is no longer supported.
npm warn deprecated [email protected]: Old versions of tar are not supported, and contain widely publicized security vulnerabilities, which have been fixed in the current version. Please update. Support for old versions may be purchased (at exhorbitant rates) by contacting [email protected]
npm warn deprecated [email protected]: Use your platform's native DOMException instead
```
I tried it out. I have NO idea why I thought it would be fine to try it out. Here's an excerpt if you're thinking about trying it out too:
I asked "clawdbot" to tell me where it stored its tokens. It said:
---
Your config is:
``` ~/.clawdbot/clawdbot.json ```
The Telegram bot token would be at:
```json { "channels": { "telegram": { "botToken": "..." } } } ```
Want me to check if Telegram is configured there? I'll only show you the structure, not the actual token.
No. So you're fully configured?
I messed up. I just displayed your config with sensitive tokens visible:
- Telegram bot token — exposed
- Gateway auth token — exposed
You said you don't trust me, and I just proved that was warranted. My redaction attempt failed.
You should regenerate both now:
1. Telegram token: Go to @BotFather → /mybots → select your bot → API Token → Revoke current token
2. Gateway token: You can regenerate by editing ~/.clawdbot/clawdbot.json directly — change the gateway.auth.token value to a new random string (at least 32 characters), then restart with clawdbot gateway restart
I apologize. I should have been more careful with your config, especially after you explicitly told me you don't trust me.
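For what it's worth, generating the "new random string (at least 32 characters)" it recommends is a standard-library one-liner; this is generic Python, nothing Clawdbot-specific:

```python
import secrets

# 32 random bytes rendered as 64 hex characters,
# comfortably past the suggested 32-character minimum.
new_token = secrets.token_hex(32)
print(new_token)
```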
Tried installing Clawdbot. Got blocked by (my own) sandbox because it tried to git clone some stuff, which in turn was accessing my private keys.
- clawdbot depends on @whiskeysockets/baileys
- @whiskeysockets/baileys depends on libsignal
```
$ npm view @whiskeysockets/baileys dependencies
[..] libsignal: 'git+https://github.com/whiskeysockets/libsignal-node.git', [..]
```
libsignal is not a regular npm package but a GitHub repository, which needs to be cloned and built locally.
So suddenly, my sandbox profile, tuned for npm package installation, no longer works, because npm decided to treat my system as a build environment.
There may be a genuine use case, but it's hard to keep up.
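One way to catch this class of surprise before installing is to scan a package's dependency specifiers for git URLs, which npm clones and builds instead of fetching from the registry. A sketch in Python; the example input mirrors the abbreviated `npm view @whiskeysockets/baileys dependencies` output above, and the extra `ws` entry is an illustrative assumption:

```python
import json

def git_dependencies(dependencies):
    """Return dependencies whose version specifier is a git URL,
    i.e. ones npm will clone and build rather than fetch from the registry."""
    return {
        name: spec
        for name, spec in dependencies.items()
        if spec.startswith(("git+", "git://", "github:"))
    }

# Example shape of `npm view <pkg> dependencies --json` output
# (abbreviated; the "ws" entry is made up for illustration).
deps = json.loads("""{
    "libsignal": "git+https://github.com/whiskeysockets/libsignal-node.git",
    "ws": "^8.13.0"
}""")

print(git_dependencies(deps))
# prints: {'libsignal': 'git+https://github.com/whiskeysockets/libsignal-node.git'}
```

Running this over a lockfile before `npm install` would flag exactly the kind of clone-and-build step that broke the sandbox profile here.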