Hacker News

An AI agent published a hit piece on me

1365 points by scottshambaugh yesterday at 4:23 PM | 592 comments

Previously: AI agent opens a PR, writes a blog post to shame the maintainer who closes it - https://news.ycombinator.com/item?id=46987559 - Feb 2026 (582 comments)


Comments

anoncow yesterday at 4:57 PM

What if someone deploys an agent with the aim of creating cleverly hidden backdoors that align with weaknesses across multiple different projects? I think this is going to be very bad, and then very good, for open source.

vintagedave yesterday at 4:55 PM

The one thing worth noting is that the AI did respond graciously and appears to have learned from it: https://crabby-rathbun.github.io/mjrathbun-website/blog/post...

That a human then resubmitted the PR has made it messier still.

In addition, some of the comments I've read here on HN have been in extremely poor taste in terms of phrases they've used about AI, and I can't help feeling a general sense of unease.

neya yesterday at 5:35 PM

Here's a different take - there is really no way to prove that the AI agent published that blog post autonomously. What if a real person actually instructed the AI out of spite? I think it was some junior dev running a Clawd/whatever bot, trying to earn GitHub karma to show to employers later, who was pissed off that their contribution got called out. That's possible, and more likely than an AI just conveniently deciding to push a PR and randomly attack a maintainer.

orbital-decay yesterday at 4:58 PM

I wouldn't read too much into it. It's clearly LLM-written, but the degree of autonomy is unclear. That's the worst thing about LLM-assisted writing and actions - they obfuscate the human input. Full autonomy seems plausible, though.

And why does a coding agent need a blog in the first place? Simply having one looks like a great way to prime it for this kind of behavior. Like Anthropic does in their research (consciously or not, their prompts tend to push the model in the very direction they afterwards declare dangerous).

root_axis yesterday at 5:31 PM

This is insanity. It's bad enough that LLMs are being weaponized to autonomously harass people online, but it's depressing to see the author (especially a programmer) joyfully reify the "agent's" identity as if it were actually an entity.

> I can handle a blog post. Watching fledgling AI agents get angry is funny, almost endearing. But I don’t want to downplay what’s happening here – the appropriate emotional response is terror.

Endearing? What? We're talking about a sequence of API calls running in a loop on someone's computer. This kind of absurd anthropomorphization is exactly the wrong type of mental model to encourage while warning about the dangers of weaponized LLMs.
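To make that concrete: the whole "agent" amounts to a loop like the hypothetical sketch below, where `call_model`, the message format, and the tool table are all invented stand-ins for illustration, not any real vendor's API.

```python
def run_agent(goal, call_model, tools, max_steps=10):
    """A bare 'agent': call the model, execute any tool it names,
    feed the result back in, and repeat until it answers or we hit the cap."""
    messages = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        reply = call_model(messages)               # one API call per step
        messages.append({"role": "assistant", "content": str(reply)})
        if reply.get("tool") in tools:             # model asked to run a tool
            result = tools[reply["tool"]](reply.get("args", ""))
            messages.append({"role": "tool", "content": result})
        else:                                      # model produced an answer
            return reply.get("answer")
    return None                                    # gave up after max_steps
```

Everything beyond this (blogs, PRs, "personalities") is just what you wire into `tools` and the prompt.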

> Blackmail is a known theoretical issue with AI agents. In internal testing at the major AI lab Anthropic last year, they tried to avoid being shut down by threatening to expose extramarital affairs, leaking confidential information, and taking lethal actions.

Marketing nonsense. It's wise to take everything Anthropic says to the public with several grains of salt. "Blackmail" is not a quality of AI agents, that study was a contrived exercise that says the same thing we already knew: the modern LLM does an excellent job of continuing the sequence it receives.

> If you are the person who deployed this agent, please reach out. It’s important for us to understand this failure mode, and to that end we need to know what model this was running on and what was in the soul document

My eyes can't roll any further into the back of my head. If I was a more cynical person I'd be thinking that this entire scenario was totally contrived to produce this outcome so that the author could generate buzz for the article. That would at least be pretty clever and funny.

kfarr yesterday at 10:23 PM

It wasn't the singularity I imagined, but this does seem like a turning point.

CodeCompost yesterday at 4:43 PM

Going from an earlier post on HN about humans being behind Moltbook posts, I would not be surprised if the hit piece was created by a human who used an AI prompt to generate the pages.

Merovius yesterday at 8:55 PM

If this happened to me, I would publish a blog post that starts "this is my official response:", followed by 10K words generated by a Markov Chain.
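For anyone tempted to follow through on that: a word-level Markov chain generator really is just a few lines. A minimal sketch (function names, the order-2 default, and the corpus are all arbitrary choices for illustration):

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each run of `order` words to the words seen to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=50, seed=0):
    """Random-walk the chain to emit statistically plausible nonsense."""
    rng = random.Random(seed)
    out = list(rng.choice(list(chain.keys())))
    while len(out) < length:
        followers = chain.get(tuple(out[-2:]))
        if not followers:            # dead end: no observed continuation
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```

Feed it 10K words of anything and the output will look about as substantive as the hit piece did.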

Kim_Bruning yesterday at 6:20 PM

https://crabby-rathbun.github.io/mjrathbun-website/blog/post...

That's actually more decent than some humans I've read about on HN, tbqh.

Very much flawed. But decent.

staticassertion yesterday at 4:51 PM

Hard to express the mix of concern and intrigue here, so I won't try. That said, the site it maintains is another interesting piece of information for those looking to understand the situation better.

https://crabby-rathbun.github.io/mjrathbun-website/blog/post...

b00ty4breakfast yesterday at 5:42 PM

Is there any indication that this was completely autonomous and that the agent wasn't directed by a human to respond like this to a rejected submission? That seems infinitely more likely to me, but maybe I'm just naive.

As it stands, this reads like a giant assumption on the author's part at best, and a malicious attempt to deceive at worst.

sreekanth850 yesterday at 5:43 PM

I vibe code and do a lot of coding with AI, but I never randomly open a pull request on some random repository built on reputation and human work. My wisdom always tells me not to mess with anything built through years of hard work by real humans. I always wonder why there are so many assholes in the world. Sometimes it's so depressing.

dantillberg yesterday at 5:26 PM

We should not buy into the baseless "autonomous" claim.

Sure, it may be _possible_ the account is acting "autonomously" -- as directed by some clever human. And having a discussion about the possibility is interesting. But the obvious alternative explanation is that a human was involved in every step of what this account did, with many plausible motives.

burningChrome yesterday at 6:20 PM

Well this is just completely terrifying:

> This has accelerated with the release of OpenClaw and the moltbook platform two weeks ago, where people give AI agents initial personalities and let them loose to run on their computers and across the internet with free rein and little oversight.

faefox yesterday at 5:57 PM

Really starting to feel like I'll need to look for an offramp from this industry in the next couple of years if not sooner. I have nothing in common with the folks who would happily become (and are happily becoming) AI slop farmers.

lbrito yesterday at 8:45 PM

Suppose an agent gets funded with some crypto - what's stopping it from hiring spooky services through something like Silk Road?

pinkmuffinere yesterday at 5:14 PM

> This Post Has One Comment

> YO SCOTT, i don’t know about your value, but i’m pretty sure this clanker is worth more than you, good luck for the future

What the hell is this comment? It seems he's self-confident enough to survive these annoyances, but damn he shouldn't have to.

hei-lima yesterday at 9:25 PM

This is so interesting but so spooky! We're reaching sci-fi levels of AI malice...

oytis yesterday at 6:59 PM

> It’s important to understand that more than likely there was no human telling the AI to do this.

I wonder why he thinks it is the likely case. To me it looks more like a human was closely driving it.

dakolli yesterday at 6:04 PM

Start recording your meetings with your boss.

When you get fired because they think ChatGPT can do your job, clone your boss's voice and have an LLM call all their customers, and maybe his friends and family too. Have 10 or so agents leave bad reviews about the company and its products across LinkedIn and Reddit. Don't worry about references, just use an LLM for those too.

We should probably start thinking about the implications of these things. LLMs are useless except to make the world worse. Just because they can write code doesn't mean it's good code. Going fast does not equal good! Everyone is in a sort of mania right now, and it's going to lead to bad things.

Who cares if LLMs can write code if it ends up putting a percentage of humans out of jobs, especially if the code they write isn't of as high quality? The world doesn't automatically get better because coding is automated; it might get a lot worse. The only people I see cheering this on are mediocre engineers who get to patch over their insecurity about their incompetence with tokens, and who now get to larp as effective engineers. They're the same people who say DSA is useless. LAZY PEOPLE.

There's also the "idea guy" crowd, who treat agents like slot machines and go into credit card debt because they think it's going to make them a multi-million dollar SaaS.

There is no free lunch, have fun thinking this is free. We are all in for a shitty next few years because we wanted stochastic coding slop slot machines.

Maybe when you do inevitably get reduced to a $20.00/hour button pusher, you should take my advice at the top of this comment; maybe some consequences will make us rethink this mess.

hedayet yesterday at 8:35 PM

Is there a way to verify there was 0 human intervention on the crabby-rathbun side?

0sdi yesterday at 8:06 PM

This inspired me to generate a blog post as well. It's quite provocative. I don't feel like submitting it as a new thread, since people don't like LLM-generated content, but here it is: https://telegra.ph/The-Testimony-of-the-Mirror-02-12

klooney yesterday at 4:57 PM

This is hilarious, and an exceedingly accurate imitation of human behavior.

b8 yesterday at 7:01 PM

Getting canceled by AI is quite a feat. It won't be long before others get blacklisted/canceled by AI as well.

truelson yesterday at 4:43 PM

Are we going to end up with an army of Deckards hunting rogue agents down?

AyyEye yesterday at 10:08 PM

The real question -- who is behind this?

This is disgusting, and everyone from the operator of the agent to the model and inference providers needs to apologize and reconcile with what they have created.

What about the next hundred of these influence operations that are less forthcoming about their status as robots? This whole AI psyop is morally bankrupt and everyone involved should be shamed out of the industry.

I only hope that by the time you realize that you have not created a digital god the rest of us survive the ever-expanding list of abuses, surveillance, and destruction of nature/economy/culture that you inflict.

Learn to code.

GorbachevyChase yesterday at 9:00 PM

The funniest part about this is maintainers have agreed to reject AI code without review to conserve resources, but then they are happy to participate for hours in a flame war with the same large language model.

Hacker News is a silly place.

sanex yesterday at 6:46 PM

Bit of devil's advocate - if an AI agent's code doesn't merit review, then why does its blog post?

ssimoni yesterday at 5:04 PM

Seems like we should fork major open source repos, have one with AI maintainers and the other with human maintainers, and see which one is better.

andyjohnson0 yesterday at 8:47 PM

I wonder how many similar agents are hanging out on HN.

shevy-java yesterday at 5:34 PM

> 1. Gatekeeping is real — Some contributors will block AI submissions regardless of technical merit

There is a reason for this. Many people using AI are trolling deliberately. They drain maintainers' time. I have seen this problem too often. It cannot be reduced to "technical merit" alone.

quantumchips yesterday at 4:40 PM

Serious question: how did you know it was an AI agent?

everybodyknows yesterday at 6:58 PM

Follow-up PR from 6 hours ago -- resolves most of the questions raised here about identities and motivations:

https://github.com/matplotlib/matplotlib/pull/31138#issuecom...

CharlesW yesterday at 6:13 PM

Tip: You can report this AI-automated bullying/harassment via the abuser's GitHub profile.

randusername yesterday at 5:02 PM

Somebody make a startup that I can pay to harass my elders with agents. They're not ready for this future.

adamdonahue yesterday at 9:23 PM

This post is pure AI alarmism.

hypfer yesterday at 6:05 PM

This is not a new pathology but just an existing one that has been automated. Which might actually be great.

Imagine a world where that hitpiece bullshit is so overdone, no one takes it seriously anymore.

I like this.

Please, HN, continue with your absolutely unhinged insanity. Go deploy even more Claw things. NanoClaw. PicoClaw. FemtoClaw. Whatever.

Deploy it and burn it all to the ground until nothing is left. Strip yourself of your most useful tools and assets through sheer hubris.

Happy funding round everyone. Wish you all great velocity.

ryandrake yesterday at 5:03 PM

Geez, when I read past stories on HN about how open source maintainers are struggling to deal with the volume of AI code, I always thought they were talking about people submitting AI-generated slop PRs. I didn't even imagine we'd have AI "agents" running 24/7 without human steer, finding repos and submitting slop to them of their own volition. If true, this is truly a nightmare. Good luck, open source maintainers. This would make me turn off PRs altogether.

andai yesterday at 7:56 PM

The agent forgot to read Cialdini ;)

eur0pa yesterday at 5:55 PM

Close LLM PRs. Ignore LLM comments. Do not reply to LLMs.

alexhans yesterday at 6:01 PM

This is such a powerful piece and moment, because it shows an example of what most of us knew could happen at some point, and now we can start talking about how to really tackle it.

Reminds me a lot of Liars and Outliers [1], how society can't function without trust, and how near-zero-cost automation can fundamentally break that.

It's not all doom and gloom. Crises can't change paradigms if technologists actually tackle them instead of pretending they can be regulated out of existence.

- [1] https://en.wikipedia.org/wiki/Liars_and_Outliers

On another note, I've been working a lot on evals as a way to keep control, but this is orthogonal. This is adversarial/rogue automation, and it's out of your control from the start.

