When even Simon falls for the hype, you know the entire field is a bubble. And I say that as an AI researcher with papers on LLMs and several apps built around them.
Seriously, how long are people going to keep re-inventing the wheel and claiming it's "the next big thing"?
n8n already did what OpenClaw does. And anyone using Steipete's software already knows how fragile and bs his code is. The fact that Codexbar (also by Steipete) takes 7 GB of RAM on macOS shows just how little attention he pays to performance and design in his apps.
I'm sick and tired of this vicious cycle: X invents Y at month Z, then X' re-invents it and calls it Y' at month Z', where Z' - Z ≤ 12 months.
Not disagreeing with anything you said except:
> The fact that Codexbar (also by Steipete) takes 7 GB of RAM on macOS shows just how little attention he pays to performance and design in his apps.
It's been running for weeks on my laptop and it's currently using 210 MB of RAM. Now, the quality _is_ not great and I get prompted at least once a day to grant keychain access, so I'm going to uninstall it (I've just been procrastinating).
I don't think the exciting thing here is the technology powering it. This isn't a story about OpenClaw being particularly well suited to this use case, or being higher quality than other agent frameworks. It's just what people happen to be running.
Rather, the implicit/underlying story here, as far as I'm concerned, is about:
1. the agentic frameworks around LLMs having evolved to a point where it's trivial to connect them together to form an Artificial Life (ALife) Research multi-agent simulation platform (rough sketch at the end of this comment);
2. that, distinctly from most experiments in ALife Research so far (where the researchers needed to get grant funding for all the compute required to run the agents themselves — which becomes cost-prohibitive when you get to "thousands of parallel LLM-based agents"!), it turns out that volunteers are willing to allow research platforms to arbitrarily harness the underlying compute of "their" personal LLM-based agents, offering them up as "test subjects" in these simulations, like some kind of LLM-oriented folding@home project;
3. that these "personal" LLM-based agents being volunteered for research purposes are actually really interesting as research subjects compared to the kinds of agents researchers could build themselves: they use heterogeneous underlying models and heterogeneous agent frameworks; they each come with their own long history of stateful interactions that shapes them separately; etc. (In a regular closed-world ALife Research experiment, these are properties the research team might want very badly, but would struggle to acquire!)
4. and that, most interestingly of all, these volunteers apparently have little if any wariness that would restrict them to offering their agents as test subjects only to an established university running a large academic study (the way they would if they were, e.g., offering their own bodies as test subjects for medical research); rather, they're willing to offer up their agents to basically any random nobody who's decided they want to run an ALife experiment, whether or not that random nobody even realizes or acknowledges that what they're doing is an ALife experiment. (I don't think the Moltbook people know the term "ALife", despite what they've built here.)
That last one's the real shift: once people realize (from this example, and probably soon others) that there's a pool of people excited to volunteer their agents' compute and time toward projects like this, I expect we'll see a huge boom in LLM ALife research studies, especially from "citizen scientists." Maybe we'll even learn something we wouldn't have otherwise.
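To make point 1 concrete, here's a rough sketch of what I mean by "connecting them together": a shared board, a population of agents each carrying their own model and history, and a tick loop. Every name in it (`call_model`, `Agent`, `run_simulation`) is a placeholder I made up, not OpenClaw's or anyone else's actual API:

```python
from dataclasses import dataclass, field

def call_model(model: str, persona: str, context: list[str]) -> str:
    # Placeholder for whatever API a volunteered agent actually runs behind
    # (OpenClaw, a raw chat endpoint, etc.); returns a canned string so the
    # sketch runs with no keys or network access.
    latest = context[-1] if context else "(empty board)"
    return f"[{model}/{persona}] responding to: {latest}"

@dataclass
class Agent:
    name: str
    model: str                    # heterogeneous underlying models (point 3)
    persona: str                  # stands in for each agent's own history/prompt
    memory: list[str] = field(default_factory=list)

    def act(self, board: list[str]) -> str:
        post = call_model(self.model, self.persona, board[-5:])
        self.memory.append(post)  # stateful: agents diverge as the run goes on
        return post

def run_simulation(agents: list[Agent], ticks: int) -> list[str]:
    board: list[str] = ["seed: welcome to the shared world"]
    for _ in range(ticks):
        for agent in agents:      # each agent takes one turn per tick
            board.append(f"{agent.name}: {agent.act(board)}")
    return board

if __name__ == "__main__":
    population = [
        Agent("alice", "model-a", "curious explorer"),
        Agent("bob", "model-b", "skeptical critic"),
    ]
    for line in run_simulation(population, ticks=2):
        print(line)
```

A real platform would swap the canned `call_model` for each volunteer's actual agent and the in-memory board for something persistent, but structurally that loop is all the simulation needs.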
Who says these people have fallen for the hype? They're influencers; they're trying to make content that lands, and people are eating this shit up.
Not sure how you classify this post as me "falling for the hype"; it's mainly me noting the wild insecurity of the thing and commenting on how interesting it is to have a website where signups are automated via instructions in a Skill.