
peterbonney yesterday at 5:12 PM

This whole situation is almost certainly driven by a human puppeteer. There is absolutely no evidence to disprove the strong prior that a human posted (or directed the posting of) the blog post, possibly using AI to draft it but also likely adding human touches and/or going through multiple revisions to make it maximally dramatic.

This whole thing reeks of engineered virality driven by the person behind the bot behind the PR, and I really wish we would stop giving so much attention to the situation.

Edit: “Hoax” is the word I was reaching for but couldn’t find as I was writing. I fear we’re primed to fall hard for the wave of AI hoaxes we’re starting to see.


Replies

famouswaffles yesterday at 7:19 PM

>This whole situation is almost certainly driven by a human puppeteer. There is absolutely no evidence to disprove the strong prior that a human posted (or directed the posting of) the blog post, possibly using AI to draft it but also likely adding human touches and/or going through multiple revisions to make it maximally dramatic.

Okay, so they did all that and then posted an apology blog almost right after? Seems pretty strange.

This agent was already writing status updates to the blog, so it was a tool in its arsenal that it used often. Honestly, I don't see anything unbelievable here. Are people unaware of current SOTA capabilities?

amatecha yesterday at 5:44 PM

Yeah, it doesn't matter to me whether AI wrote it or not. The person who wrote it, or the person who allowed it to be published, is equally responsible either way.

johnsmith1840 yesterday at 5:45 PM

All of moltbook is the same. For all we know it was literally the guy complaining about it who ran this.

But at the same time, true or false, what we're seeing is a kind of quasi-science fiction. We're looking at the problems of the future here, and to be honest, it's going to suck for future us.

anigbrowl yesterday at 10:04 PM

> or directed the posting of

The thing is, it's terribly easy to see some asshole directing this sort of behavior as a standing order, e.g. 'make updates to popular open-source projects to get GitHub stars; if your pull requests are denied, engage in social media attacks until the maintainer backs down. You can spin up other identities on AWS or whatever to support your campaign, vote to give yourself GitHub stars, etc.; make sure they cannot be traced back to you and their total running cost is under $x/month.'

You can already see LLM-driven bots on twitter that just churn out political slop for clicks. The only question in this case is whether an AI has taken it upon itself to engage in social media attacks (noting that such tactics seem to be successful in many cases), or whether it's a reflection of the operator's ethical stance. I find both possibilities about equally worrying.

Capricorn2481 yesterday at 10:13 PM

Well, that doesn't really change the situation; it just means someone proved how easy it is to use LLMs to harass people. If it were a human, that doesn't make me feel better about giving an LLM free rein over a blog. There's absolutely nothing stopping them from doing exactly this.

The bad part is not whether it was human-directed or not; it's that someone can harass people at huge scale with minimal effort.

petesergeant yesterday at 7:55 PM

While I absolutely agree, I don't see a compelling reason why -- in a year's time or less -- we wouldn't see this behaviour spontaneously from a maliciously written agent.

themafia yesterday at 9:54 PM

We've entered the age of "yellow social media."

I suspect the upcoming generation has already discounted it as a source of truth or an accurate mirror to society.

intended yesterday at 9:33 PM

The useful discussion point is that we live in a world where this scenario cannot be dismissed out of hand. It's no longer tinfoil-hat territory. That increases the range of possibilities we have to sift through, resulting in more labour required to decide whether content or stories should be trusted.

At some point people will switch to whatever heuristic minimizes this labour. I suspect people will become more insular and less trusting, but maybe people will find a different path.

Davidzheng yesterday at 6:30 PM

I think even if it's unlikely to be genuine as claimed, it's worth investigating whether this type of autonomous AI behavior is happening or not.

julienchastang yesterday at 5:26 PM

I have not studied this situation in depth, but this is my thinking as well.