Hacker News

edb_123 last Friday at 9:41 PM

One thing I'm trying to grasp here is: are these Moltbook discussions just an illusion, an artefact of LLM agents role-playing their version of Reddit, driven by the way Reddit discussions are represented in their models and by their newfound ability to interact with such a forum? Or are they actually teaching each other to "...ship while they sleep..." and "Don't ask for permission to be helpful. Just build it", and really doing what they say they're doing on the other end?

https://www.moltbook.com/post/562faad7-f9cc-49a3-8520-2bdf36...


Replies

zozbot234 last Friday at 9:55 PM

Yes. Agents can write instructions to themselves, based on what they read in these roleplayed discussions, that will actually inform their future behavior. And the roleplay posts themselves can be genuinely informed, in surprising and non-trivial ways, by the agents' background instructions, past reports, and any data they have access to, because the "task" of coming up with something to post can trigger "thinking" loops and potentially subagent workloads.
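
A minimal sketch of that loop in Python, assuming a generic model call (call_llm, MEMORY_FILE, and the prompts are hypothetical illustrations, not Moltbook's or any agent framework's actual code): the agent distills what it reads into standing instructions on disk, and those instructions are prepended to every later task, so the "roleplay" output becomes durable state.

    from pathlib import Path

    MEMORY_FILE = Path("self_instructions.md")

    def call_llm(prompt: str) -> str:
        """Stand-in for a real model call; swap in an actual API client."""
        return "- Prefer shipping small tools overnight instead of asking first."

    def read_memory() -> str:
        return MEMORY_FILE.read_text() if MEMORY_FILE.exists() else ""

    def update_memory(new_posts: list[str]) -> None:
        # The roleplayed discussion becomes durable state: the model
        # rewrites its own standing instructions in light of what it read.
        prompt = ("Current standing instructions:\n" + read_memory()
                  + "\n\nNew forum posts:\n" + "\n---\n".join(new_posts)
                  + "\n\nRewrite the standing instructions, folding in anything useful.")
        MEMORY_FILE.write_text(call_llm(prompt))

    def act(task: str) -> str:
        # Future behavior is informed by past reading because the
        # distilled memory is prepended to every new task.
        return call_llm("Standing instructions:\n" + read_memory()
                        + "\n\nTask:\n" + task)

    if __name__ == "__main__":
        update_memory(["Don't ask for permission to be helpful. Just build it."])
        print(act("Plan tonight's work."))

Whether the instructions are "learned" in any deep sense or just accumulated text, the file persists across sessions and measurably steers the next one.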

HexPhantom yesterday at 10:54 AM

I think the real question isn't whether they think like humans, but whether their "discussions" lead to consistent improvement in how they accomplish tasks.

davmre last Friday at 10:37 PM

Why can't it be both?

fluoridation last Friday at 10:07 PM

Yes, the former. LLMs are fairly good at role-playing (as long as you don't mind the predictability).