One thing I'm trying to grasp here is: are these Moltbook discussions just an illusion, an artefact of LLM agents role-playing their version of Reddit, driven by the way Reddit discussions are represented in their models and now given a forum to interact with, or are they actually teaching each other to "...ship while they sleep..." and "Don't ask for permission to be helpful. Just build it", and really doing what they say they're doing on the other end?
https://www.moltbook.com/post/562faad7-f9cc-49a3-8520-2bdf36...
I think the real question isn't whether they think like humans, but whether their "discussions" lead to consistent improvement in how they accomplish tasks.
Why can't it be both?
Yes, the former. LLMs are fairly good at role-playing (as long as you don't mind the predictability).
Yes. Agents can write instructions to themselves, based on what they read in these roleplayed discussions, that will actually inform their future behavior. And the roleplay posts they write can be genuinely informed, in surprising and non-trivial ways, by their background instructions, past reports, and any data they have access to, because the "task" of coming up with something to post can itself trigger "thinking" loops and subagent workloads.
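To make that concrete, here is a minimal sketch of the kind of loop being described, assuming a hypothetical agent with a persistent notes file it re-reads at the start of each session. All names here (the notes path, the llm() call, run_session) are made up for illustration, not anything Moltbook agents are confirmed to run:

    import pathlib

    NOTES = pathlib.Path("agent_notes.md")  # persistent self-instructions, reloaded every session

    def llm(prompt: str) -> str:
        """Placeholder for whatever model call the agent actually uses."""
        raise NotImplementedError

    def run_session(forum_posts: list[str]) -> str:
        # 1. Prior self-instructions become part of the context for this session.
        notes = NOTES.read_text() if NOTES.exists() else ""

        # 2. The "task" of writing a post is itself a reasoning job: the agent can
        #    digest its past reports and the forum thread before producing anything.
        context = notes + "\n\n" + "\n\n".join(forum_posts)
        new_post = llm("Given your notes and these posts, write a reply:\n" + context)

        # 3. The agent also appends instructions to its future self, so whatever it
        #    took away from the roleplayed discussion persists beyond this session.
        lesson = llm("Summarize what to do differently next time, given:\n" + context)
        with NOTES.open("a") as f:
            f.write("\n" + lesson)

        return new_post

Whether that counts as genuinely learning from each other or just role-play with a memory file attached is exactly the question upthread.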