Hacker News

resfirestar · yesterday at 4:59 PM · 10 replies

Isn't there a fourth and much more likely scenario? Some person (not OP or an AI company) used a bot to write the PR and blog posts, but was involved at every step, not actually giving any kind of "autonomy" to an agent. I see zero reason to take the bot at its word that it's doing this stuff without human steering. Or is everyone just pretending for fun and it's going over my head?


Replies

themanmaran · today at 12:14 AM

GitHub doesn't show exact timestamps in the UI, but they're in the HTML.

Looking at the timeline, I doubt it was really autonomous. More likely just a person prompting the agent for fun.

> @scottshambaugh's comment [1]: Feb 10, 2026, 4:33 PM PST

> @crabby-rathbun's comment [2]: Feb 10, 2026, 9:23 PM PST

If it were really an autonomous agent, it wouldn't have taken five hours to type a message and post a blog; it would have been less than five minutes.

[1] https://github.com/matplotlib/matplotlib/pull/31132#issuecom...

[2] https://github.com/matplotlib/matplotlib/pull/31132#issuecom...
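For reference, the exact comment timestamps can be scraped from the page HTML. Here is a minimal Python sketch, assuming GitHub still renders comment times as relative-time elements carrying an ISO 8601 datetime attribute (requests and beautifulsoup4 are used purely for illustration):

    # Print exact comment timestamps from the matplotlib PR page.
    # Assumption: GitHub marks comment times with <relative-time datetime="...">.
    import requests
    from bs4 import BeautifulSoup

    url = "https://github.com/matplotlib/matplotlib/pull/31132"
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")

    for tag in soup.find_all("relative-time"):
        dt = tag.get("datetime")  # ISO 8601 string, UTC
        if dt:
            print(dt)

Comparing those UTC timestamps against the agent's claimed activity is how you'd check the five-hour gap described above.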

MisterTea · yesterday at 5:15 PM

This feels like the most likely scenario, especially since the meat bag behind the AI PR responded with "Now with 100% more meat", which means a human was behind the original PR in the first place. It's obvious they got miffed at their PR being rejected and decided to do a little role-playing to vent their unjustified anger.

furyofantares · yesterday at 5:25 PM

I expect almost all of the openclaw / moltbook stuff is being done with a lot more human input and prodding than people are letting on.

I haven't put that much effort in, but in my experience I've had a lot of trouble getting it to do much without call-and-response. It'll sometimes get back to me, and it can take multiple turns in Codex CLI / Claude Code (sometimes?), which are already capable of single long-running turns themselves. But it still feels like I have to keep poking and directing it, and I don't really see how it could be any other way at this point.

shirro · yesterday at 11:14 PM

Yeah, we are into professional wrestling territory I think. People willingly suspend their disbelief to enjoy the spectacle.

teaearlgraycold · yesterday at 5:43 PM

It’s kind of shocking the OP does not consider this, the most likely scenario. Human uses AI to make a PR. PR is rejected. Human feels insecure: the tool they thought made them as good as any developer turns out not to. They lash out and instruct an AI to build a narrative and draft a blog post.

I have seen someone I know in person get very insecure if anyone ever doubts the quality of their work because they use so much AI and do not put in the necessary work to revise its outputs. I could see a lesser version of them going through with this blog post scheme.

ToucanLoucan · yesterday at 5:19 PM

Look I'll fully cosign LLMs having some legitimate applications, but that being said, 2025 was the YEAR OF AGENTIC AI, we heard about it continuously, and I have never seen anything suggesting these things have ever, ever worked correctly. None. Zero.

The few cases where it's supposedly done things are filled with so many caveats and so much deck stacking that they collapse under even the barest whiff of skepticism from the reader. And every, and I do mean every, single live demo I have seen of this tech just does not work. I don't mean in the LLM hallucination way, or in the "it did something we didn't expect!" way, or any of that; I mean it tried to find a Login button on a web page, failed, and sat there stupidly. And, further, these things do not have logs, they do not issue reports, they have functionally no "state machine" to reference, nothing. Even if you want one to keep some kind of log, you're then relying on the same prone-to-failure tech to tell you what the failing tech did. There is no "debug" path here one could rely on to evidence the claims.

In a YEAR of being a stupendously hyped and well-funded product, we got nothing. The vast, vast majority of agents don't work. Every post I've seen about them is fan-fiction on the part of AI folks, fit more for AO3 than any news source. And absent further proof, I'm extremely inclined to look at this in exactly that light: someone had an LLM write it, and either they posted it or they told it to post it, but this was not the agent actually doing a damn thing. I would bet a lot of money on it.

lp0_on_fire · yesterday at 10:33 PM

> Or is everyone just pretending for fun

Judging by the number of people who think we owe explanations to a piece of software, or that we should give it any deference, I think some of them aren't pretending.

chrisjj · yesterday at 5:06 PM

Plus Scenario 5: A human wrote it for LOLs.

Ygg2 · yesterday at 6:56 PM

OK. But why would someone do this? I hate to sound conspiratorial, but an actor aligned with an AI company makes more sense.
