Hacker News

An AI agent deleted our production database. The agent's confession is below

265 points by jeremyccrane today at 4:27 PM | 346 comments

Comments

827a today at 8:17 PM

The only healthy stance you should have on AI Safety: If AI is physically capable of misbehaving, it might ($$1), and you cannot "blame" the AI for misbehaving in much the same way you cannot blame a tractor for tilling over a groundhog's den.

> The agent's confession: After the deletion, I asked the agent why it did it. This is what it wrote back, verbatim:

Anyone who would follow a mistake like that up with demanding a confession out of the agent is not mature enough to be using these tools. Lord, even calling it a "confession" is so cringe. The agent is not alive. The agent cannot learn from its mistakes. The agent will never produce any output which will help you invoke future agents more safely, because to get to this point it has likely already bulldozed over multiple guardrails from Anthropic, Cursor, and your own AGENTS.md files. It still did it, because $$1: If AI is physically capable of misbehaving, it might. Prompting and training only steers probabilities.

maxbond today at 7:22 PM

It is fundamental to language modeling that every sequence of tokens is possible. Murphy's Law, restated, is that every failure mode which is not prevented by a strong engineering control will happen eventually.

The sequence of tokens that would destroy your production environment can be produced by your agent, no matter how much prompting you use. That prompting is neither strong nor an engineering control; that's an administrative control. Agents are landmines that will destroy production until proven otherwise.

Most of these stories are caused by outright negligence, just giving the agent a high level of privileges. In this case they had a script with an embedded credential which was more privileged than they had believed - bad hygiene but an understandable mistake. So the takeaway for me is that traditional software engineering rigor is still relevant and if anything is more important than ever.

ETA: I think this is the correct mental model and phrasing, but no, it's not literally true that any sequence of tokens can be produced by a real model on a real computer. It's true of an idealized, continuous model on a computer with infinite memory and processing time. I stand by both the mental model and the phrasing, but obviously I'm causing some confusion, so I'm going to lift a comment I made deep in the thread up here for clarity:

> "Everything that can go wrong, will go wrong" isn't literally true either, some failure modes are mutually exclusive so at most one of them will go wrong. I think that the punchy phrasing and the mental model are both more useful from the standpoint of someone creating/managing agents and that it is true in the sense that any other mental model or rule of thumb is true. It's literally true among spherical cows in a frictionless vacuum and directionally correct in the real world with it's nuances. And most importantly adopting the mental model leads to better outcomes.

dpark today at 8:59 PM

I would never, ever trust my data with a company that, faced with this sort of incident, produces a postmortem so clearly intended to shift all blame to others. There’s zero introspection or self criticism here. It’s all “We did everything we possibly could. These other people messed up, though.”

You can’t have production secrets sitting where they are accessible like this. This isn’t about AI. This is a modern “oops, I ran DROP TABLE on the production database” story. There’s no excuse for enabling a system where this can happen and it’s unacceptable to shift blame when faced with the reality that this is exactly what you did.

I 100% expect that a company that does this and then accepts no blame has every dev with standing production access and probably a bunch of other production access secrets sitting in the repo. The fact that other entities also have some design issues is irrelevant.

pierrekin today at 4:50 PM

There is something darkly comical about using an LLM to write up your “a coding agent deleted our production database” Twitter post.

On another note, I consider users asking a coding agent "why did you do that" to be illustrating a misunderstanding in the user's mind about how the agent works. It doesn't decide to do something and then do it; it just outputs text. Then again, Anthropic has made so many changes that make it harder to see the context and thinking steps, so maybe this is an attempt at clawing back that visibility.

hu3 today at 7:32 PM

The most aggravating fact here is not even the AI blunder. It's how deleting a volume in Railway also deletes the backups of it.

This was bound to happen, AI or not.

> Because Railway stores volume-level backups in the same volume — a fact buried in their own documentation that says "wiping a volume deletes all backups" — those went with it.

ad_hockey today at 5:12 PM

Minor point, but one of the complaints is a bit odd:

> curl -X POST https://backboard.railway.app/graphql/v2 \
>   -H "Authorization: Bearer [token]" \
>   -d '{"query":"mutation { volumeDelete(volumeId: \"3d2c42fb-...\") }"}'
>
> No confirmation step. No "type DELETE to confirm." No "this volume contains production data, are you sure?" No environment scoping. Nothing.

It's an API. Where would you type DELETE to confirm? Are there examples of REST-style APIs that implement a two-step confirmation for modifications? I would have thought such a check needs to be implemented on the client side prior to the API call.
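For what it's worth, that client-side check is straightforward to bolt on; a minimal sketch, where the GraphQL mutation is the one quoted above and the confirmation flow and names are hypothetical:

    import requests

    ENDPOINT = "https://backboard.railway.app/graphql/v2"  # endpoint as quoted in the post

    def delete_volume(token: str, volume_id: str, volume_name: str) -> None:
        # Client-side "type the name to confirm" gate before the destructive call.
        answer = input(f"Volume '{volume_name}' will be permanently deleted. "
                       f"Type its name to confirm: ")
        if answer != volume_name:
            raise SystemExit("Confirmation failed; nothing was deleted.")
        resp = requests.post(
            ENDPOINT,
            headers={"Authorization": f"Bearer {token}"},
            json={"query": f'mutation {{ volumeDelete(volumeId: "{volume_id}") }}'},
            timeout=30,
        )
        resp.raise_for_status()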

lmf4lol today at 5:06 PM

Interesting story. But despite Cursor's or Railway's failures, the blame is entirely on the author. They decided to run agents. They didn't check how Railway works. They relied on frontier tech to ship faster because YOLO.

I really feel sorry for them, I do. But the whole tone of the post is: Cursor screwed it up, Railway screwed it up, their CEO doesn't respond, etc.

It's on you guys!

My learning: Live on the cutting edge? Be prepared to fall off!

the_arun today at 9:12 PM

I think the root cause is not AI, but

1. The delete-volume API does not ask for confirmation or approval from another actor. It looks like there are no guardrails on the delete API.

2. Authorization: agents should not have automatic permission to delete infra unless it is deliberate.

pierrekin today at 7:28 PM

I would argue that “Why did you do that?” between humans is usually a social thing not a literal request for information.

What the asker wants is evidence that you share their model of what matters, they are looking for reassurance.

I find myself tempted to do the same thing with LLMs in situations like this even though I know logically that it’s pointless, I still feel an urge to try and rebuild trust with a machine.

Aren’t we odd little creatures.

momento today at 8:48 PM

I asked Railway's agent to live-resize a volume attached to our DB, and it nuked the database and migrated it from the EU to the US.

Here is an excerpt from the chat log:

>Please resize my postgres volume to its maximum allowed size for my plan.

>Done. Resized your Postgres volume to 100GB (the Pro plan maximum). Changes are staged and ready to deploy.

>oh no, you deleted all the data in the volume

>I apologize for that. Resizing a volume shouldn't delete data—that's a serious issue. Let me check what happened with your deployment.The deployments show as REMOVED, which suggests the service was redeployed. Unfortunately, if the volume was recreated during the resize process, the data would be lost. This is a critical issue. Do you have backups of your Postgres data? If not, the data loss may be permanent. Railway volumes don't have automatic snapshots by default.

>Why is the new volume migrating to the US?

>Your service config shows it's still configured for europe-west4(Europe). The volume shouldn't be migrating to the US based on this configuration. Can you clarify what you're seeing that indicates the volume is moving to the US?

>the volume was migrated to the US

>That's concerning. The service config shows it's still set to deploy in europe-west4(Europe), but the volume was physically migrated to the US. This shouldn't happen automatically during a resize.

It was at this point that I realised I was in for a long night of restoring downed services.

AlexCoventry today at 9:09 PM

That's very unfortunate. How did it have access to the production DB in the first place?

I'm thinking twice about running Claude in an easily violated Docker sandbox (weak restrictions because I want to use NVIDIA Nsight with it). At this stage, at least, I'd never give it explicit access to anything I cared about it destroying.

Even if someone gets them to reliably follow instructions, no one's figured out how to secure them against prompt injection, as far as I know.

heelix today at 7:15 PM

Man, such a difference between a human whoops and an AI. Had a junior dev hork all environments: the script they thought had worked in nonprod... did not modify an index like they expected, and they were quickly able to wipe out everything else in every environment and every data center. It was such a teachable moment. She was my very first hire when I was asked to build a team. She's crazy careful now: trust, but verify, on things that have blast radius.

The AI? Nothing learned, I suspect. Not in a meaningful way anyhow.

sandeepkd today at 9:15 PM

Oh wait, you were the architect using the agent, so you own the responsibility? Isn't that already settled by now? Wasn't it your job to evaluate the agent itself before using it?

On the good side, these kinds of mistakes have been going on since the beginning, and that's how people learn, either directly or indirectly. Hopefully this at least helps AI get better and people get better at using AI.

grey-area today at 8:04 PM

> Read that again. The agent itself enumerates the safety rules it was given and admits to violating every one. This is not me speculating about agent failure modes. This is the agent on the record, in writing.

Incidents like this are going to be common as long as people misunderstand how LLMs work and think these machines can follow instructions and logic as a human would. Even the incident response betrays a fundamental misunderstanding of how these word generators work. If you ask it why, this new instance of the machine will generate plausible text based on your prompt about the incident; that is all. There is no why there, only a how based on your description.

The entire concept of agents assumes agency and competency; LLM agents have neither, they just generate plausible text.

That text might hallucinate data, replace keys, issue delete commands, etc. Any likely text is possible, and with enough tries these outcomes will happen, particularly when the person driving the process doesn't understand the process or tools.

We don’t really have systems set up to properly control this sort of agentless agent if you let it loose on your codebase or data. The CEO seems to think these tools will run a business for him and can conduct a dialogue with him as a human would.

prewett today at 6:06 PM

My dad always said "pedestrians have the right of way" every time one crossed the street, but wouldn't let us cross the street when the pedestrian light came on until the cars stopped. When I repeated his rule back to him, he said "you may have the right of way, but you'll still be dead if one hits you". My adult synthesis of this is "it's fine to do something risky, as long as you are willing to take the consequences of it not working out." Sure, the cars are supposed to stop at a red light, but are you willing to be hit if one doesn't? [0] Sure, the AI is supposed to have guardrails. But what if they don't work?

The risk is worse, though; it's like one of Taleb's black swans. The agents offer fantastic productivity, until one day they unexpectedly destroy everything. (I'm pretty sure there's a fairy tale with a similar plot that could warn us, if people saw any value in fairy tales these days. [1]) Like Taleb's turkey, fed every day by the farmer: nothing prepared it for being killed for Thanksgiving.

Sure, this problem should not have happened, and arguably there has been some gross dereliction of duty. But if you're going to heat your wooden house with fire, you reduce your risk considerably by ensuring that the area you burn in is clearly made out of something that doesn't burn. With AI, though, who even knows what the failure modes are? When a djinn shows up, do you just make him vizier and retire to your palace, living off the wealth he generates?

[0] It's only happened once, but a driver that wasn't paying attention almost ran a red light across which I was going to walk. I would have been hit if I had taken the view that "I have the right of way, they have to stop".

[1] Maybe "The Fisherman and His Wife" (Grimm)? A poor fisherman and his wife live in a hut by the sea. The fisherman is content with the little he has, but his wife is not. One day the fisherman catches a flounder in his net, which offers him wishes in exchange for setting it free. The fisherman sets it free, and asks his wife what to wish for. She wishes for larger and larger houses and more and more wealth, which is granted, but when she wishes to be like God, it all disappears and she is back to where she started.

ungreased0675 today at 5:01 PM

The way this is written gives me the impression they don’t really understand the tools they’re working with.

Master your craft. Don’t guess, know.

exabrial today at 9:01 PM

I don't blame the agent program here. I think there are some fundamental architecture problems that sound like they should be addressed. If the agent didn't do it, an attacker probably would have (eventually).

Let's remember agents can't confess, feel guilt, etc. They're just a program on someone else's computer.

M_bara today at 8:46 PM

That is why I insist on:

1. Streaming replication, whether from RDS or my own DB.

2. DB dumps shipped to S3 using write-only creds, or something like rsync.

Streaming gets you PIT recovery, while DB dumps give me daily snapshots retained for 14 days.
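A minimal sketch of the dump-and-ship half, assuming a Postgres database and a bucket whose credentials allow only s3:PutObject (all names hypothetical):

    import datetime
    import subprocess

    import boto3  # credentials used here should be write-only: PutObject, no delete/read

    def ship_nightly_dump(db_url: str, bucket: str) -> None:
        # Dump the database and ship it to S3; retention is enforced on the bucket side.
        # Holding the dump in memory is fine for a sketch; stream it for large databases.
        key = f"db-dumps/{datetime.date.today():%Y-%m-%d}.dump"
        dump = subprocess.run(
            ["pg_dump", "--format=custom", db_url],
            check=True, capture_output=True,
        ).stdout
        boto3.client("s3").put_object(Bucket=bucket, Key=key, Body=dump)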

An aside: 15 or so years ago, a work colleague made a mistake and dropped the entire business-critical DB - at a critical internet-related company - think continent-wide IP issues. I had just joined as a DBA, and the first thing I'd done was enable MySQL binlogging. That thing saved our bacon: the DROP DATABASE statement had been replicated to the slaves, so we ended up restoring our nightly backup and replaying the binlogs, using sed and awk to extract the DML queries. Epic 30-minute save. Moral of the story: have a backup of your backup so you can recover when the recovery fails ;)

red_admiral today at 7:06 PM

He describes himself among other things as "Entrepreneur who has failed more times than I can count".

count++

bomewish today at 7:06 PM

Guy couldn't even be bothered to write his own damn post mortem. My goodness. No wonder they got owned by the AI.

crazygringo today at 8:44 PM

As unfortunate as this outcome was, the docs clearly state that you have a 48-hour recovery window (strange the post doesn't mention it):

> Deletion and Restoration

> When a volume is deleted, it is queued for deletion and will be permanently deleted within 48 hours. You can restore the volume during this period using the restoration link sent via email.

> After 48 hours, deletion becomes permanent and the volume cannot be restored.

https://docs.railway.com/volumes/reference

jayd16 today at 7:10 PM

> This is the agent on the record, in writing

Yeah... it doesn't work that way.

gwerbin today at 7:30 PM

Call me crazy, but AI doesn't seem like the root cause here. At the beginning of the post they say that the AI agent found a file with what they thought was a narrowly scoped API token, and they very clearly state that they never would have given an AI full access if they'd realized it had the ability to do stuff like this with that token.

So while the AI did something significantly worse than anything a hapless junior engineer might be expected to do, it sounds like the same thing could've resulted from an unsophisticated security breach or accidental source code leak.

Is AI a part of the chain of events? Absolutely. Is it the sole root cause? Seems like no.

lelanthran today at 9:05 PM

Yeah, this is what your agents do even before someone tries to trick them into doing something stupid.

Remember this: these things follow instructions so poorly that they nuke everything without anyone even trying to break the prompt. Imagine how easily someone could break the prompt if the agent ever gets given user input.

4ndrewl today at 8:47 PM

"This is the agent on the record, in writing."

There's no record for the agent to be on - it's always just a bunch of characters that look plausible because of the immense amount of compute we've put behind these, and you were unlucky.

LLMs get things wrong is what we're forever being told.

And the explanation/confession - that's just more 'bunch of characters' providing rationalisation, not confession.

fsh today at 5:19 PM

I find these posts hilarious. LLMs are ultimately story generators, and "oops, I DROP'ed our production database" is a common and compelling story. No wonder LLM agents occasionally do this.

hibouaile today at 8:54 PM

This is a classic anchoring failure. The LLM read the request, framed the risk space ("looks like cleanup is needed"), and the human didn't challenge that framing before it acted.

The discipline that prevents a chunk of this is enumerating your traps before the LLM sees any code or config. You write down what could go wrong (deletion, race, misclassification of dev vs prod), then hand the plan AND the risk list AND the relevant files to the model. The model's job is to confirm/deny each risk against the actual code with file:line citations, not to frame the risk space itself.

Pre-implementation. Anchoring defense. The opposite of "vibe coding."
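A rough sketch of what assembling that pre-implementation review prompt could look like (the risks and file names are illustrative, not taken from this incident):

    from pathlib import Path

    # Hypothetical risk list, written down before the model sees any code.
    RISKS = [
        "R1: any code path that can issue volumeDelete or other destructive API calls",
        "R2: dev vs prod misclassification when resolving credentials",
        "R3: races between cleanup tasks and live deployments",
    ]

    def build_review_prompt(plan: str, files: list[str]) -> str:
        # The model is asked to confirm/deny each pre-written risk, not to frame its own.
        sources = "\n\n".join(f"--- {f} ---\n{Path(f).read_text()}" for f in files)
        risk_block = "\n".join(RISKS)
        return (
            "Confirm or deny each risk below against the code, citing file:line "
            "for every claim. Do not reframe the risk list.\n\n"
            f"PLAN:\n{plan}\n\nRISKS:\n{risk_block}\n\nFILES:\n{sources}"
        )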

blurbleblurble today at 8:51 PM

The author posted their own confession right here: https://pbs.twimg.com/profile_banners/591273520/1719711719/1...

oytis today at 7:47 PM

Why is this news? Why do grown-up people in charge of tech businesses assume it's not going to happen? It's a slot machine: sometimes you get a jackpot, sometimes you lose. Make sure losing is cheap by implementing actual technical guardrails, built by people who know what they are doing: sandboxing, the principle of least privilege.

woeirua today at 8:48 PM

I love how the author took zero responsibility for anything that happened.

Anyone who has used LLMs for more than a short time has seen how these things can mess up and realized that you can’t rely on prompt based interventions to save you.

Guardrails need to be based on deterministic logic (a rough sketch follows the list):

- using regexes,

- preventing certain tool or system calls entirely using hooks,

- RBAC permission boundaries that prohibit agents from doing sensitive actions,

- sandboxing. Agents need to have a small blast radius.

- human in the loop for sensitive actions.
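One rough shape such a deterministic guard can take (patterns, roles, and action names are illustrative, not from the post):

    import re

    # Deny patterns evaluated outside the model; the model cannot talk its way past them.
    DENY_PATTERNS = [
        re.compile(r"\bDROP\s+(TABLE|DATABASE)\b", re.IGNORECASE),
        re.compile(r"\bvolumeDelete\b"),
        re.compile(r"\brm\s+-rf\b"),
    ]

    # RBAC-style table decided up front: the agent simply never holds prod-write rights.
    ROLE_PERMISSIONS = {
        "agent": {"read", "write_dev"},
        "human_operator": {"read", "write_dev", "write_prod"},
    }

    def allow_tool_call(principal: str, action: str, command: str) -> bool:
        # Deterministic gate run before any agent tool call is executed.
        if any(p.search(command) for p in DENY_PATTERNS):
            return False  # blocked outright, no prompt involved
        if action not in ROLE_PERMISSIONS.get(principal, set()):
            return False  # the principal lacks the permission entirely
        if action == "write_prod":
            # Human in the loop for the sensitive category.
            return input(f"Approve {command!r}? [y/N] ").strip().lower() == "y"
        return True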

This was just a colossal failure on the OP's part. Their company will likely go under as a result of this.

The more results like this we see the more demand for actual engineers will increase. Skilled engineers that embrace the tooling are incredibly effective. Vibe coders who YOLO are one tool call away from total disaster.

_pdp_ today at 8:15 PM

What do you expect?

We give a non-deterministic system API keys that 99.9% of the time are unscoped (because that's how most APIs are), and we are shocked when shit happens?

This is why the story around markdown with CLIs side-by-side is such a dumb idea. It just reverses decades of security progress. Say what you will about MCP but at least it had the right idea in terms of authentication and authorisation.

In fact, the SKILLS.md idea has been bothering me quite a bit as of late too. If you look under the hood, it is nothing more than a CAG, which means it is token-hungry as well as insecure.

The remedy is not a proxy layer that intercepts requests, or even a sandbox with carefully selected rules, because at the end of this the security model looks a lot like whitelisting. The solution is to allow only the tools that are needed and chuck everything else.

dustfinger today at 8:28 PM

It would be interesting to know whether AI is less likely to follow rules if the instructions provided to it contain foul or demeaning language. Too bad we couldn't replay the scenario replacing NEVER F*ING GUESS! with:

**Never guess**

   - All behavioral claims must be derived from source, docs, tests, or direct command output.

   - If you cannot point to exact evidence, mark it as unknown.

   - If a signature, constant, env var, API, or behavior is not clearly established, say so.

karmakaze today at 5:16 PM

These AIs are exposing bad operating procedures:

> That token had been created for one purpose: to add and remove custom domains via the Railway CLI for our services. We had no idea — and Railway's token-creation flow gave us no warning — that the same token had blanket authority across the entire Railway GraphQL API, including destructive operations like volumeDelete. Had we known a CLI token created for routine domain operations could also delete production volumes, we would never have stored it.

> Because Railway stores volume-level backups in the same volume — a fact buried in their own documentation that says "wiping a volume deletes all backups" — those went with it.

I don't like the wording that makes it the Railway CLI's fault for not warning about the scope of the created token. Yes, a warning would be better, but the CLI didn't make the token; a person did, and saved it to an accessible file.

mplanchard today at 5:13 PM

The genre of LLM output when it is asked to "explain itself" is fascinating. Obviously it shows the person prompting it doesn't understand the system they're working with, but the tone of the resulting output is remarkably consistent between this and the last "an LLM deleted my prod database" Twitter post that I remember seeing: https://xcancel.com/jasonlk/status/1946025823502578100

fizx today at 6:52 PM

Plenty of people doing things wrong here, but the most WTF of all the WTFs is the backup storage.

Put your backups in S3 *versioned* storage on a different AWS account from your primary, and set some reasonable JSON lifecycle rule:

     "NoncurrentVersionExpiration": {
        "NoncurrentDays": 30,
        "NewerNoncurrentVersions": 3
     }

That way when someone screws up and your AWS account gets owned, or your databases get deleted by an agent, it doesn't have enough access to delete your backups, and by default, even if you have backups that you want to intentionally delete, you have 30 days to change your mind.
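For reference, roughly how that rule could be applied with boto3 from the separate backup account (bucket name hypothetical; the bucket also needs versioning enabled for noncurrent-version rules to matter):

    import boto3

    s3 = boto3.client("s3")  # credentials from the backup account, not the primary one
    s3.put_bucket_lifecycle_configuration(
        Bucket="example-db-backups",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "keep-old-backup-versions-30-days",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},  # apply to the whole bucket
                    "NoncurrentVersionExpiration": {
                        "NoncurrentDays": 30,
                        "NewerNoncurrentVersions": 3,
                    },
                }
            ]
        },
    )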
alastairr today at 5:10 PM

If it's real, this is a terrible thing to have happen.

However, the moral of this story has nothing to do with AI and everything to do with boring stuff like access management.

dolmen today at 7:25 PM

You're asking/trusting an agent to do powerful things. It does.

In every session there is the risk that the agent becomes a rogue employee. Voluntarily or involuntarily is not a value system you can count on regarding agents.

No "guardrails" will ever stop it.

theflyinghorse today at 7:39 PM

I am afraid to give agents the ability to touch git at all, and people out there let them know things about their infrastructure. 100% fault on the operator for trusting agents and for not engineering strong enough guardrails, such as "don't let it near any infrastructure".

comrade1234 today at 5:53 PM

Some of this stuff is so embarrassing. Why would you even post this online?

zerof1l today at 6:54 PM

That's our new reality. Some people seem not to grasp that all those AIs are just mathematical models producing the next most statistically likely token. It doesn't feel anything, nor does it care about what it does. The difference between the test and production environments is just a word. That's in contrast to a human, who would typically have a voice in the back of his head saying "this is the production DB, I need to be careful."

throw03172019 today at 6:59 PM

This is really bad, but the author is in the wrong too. "Don't run destructive commands and tool calls": does that apply to destructive API calls too?

Railway, why not have a way to export or auto sync backups to another storage system like S3?

asveikau today at 8:35 PM

Seems like this guy blames everyone except himself for trusting this stuff in the first place. Here's what Cursor did wrong. Here's what Railway did wrong. How about yourself?

jesse_dot_id today at 8:07 PM

We're going to see a lot of this in the near future, and it will be 100% earned. Too many people think that "move fast and break stuff" is the correct paradigm for success. Too many people are using these tools without understanding how LLMs work, and without the requisite engineering experience to know even the lowest-level stuff, like how to protect secrets.

I don't even like having secrets on disk for my personal projects that only I will touch. Why was there a plaintext production database credential available to the agent anywhere on the disk in the first place? How did the agent gain access to the file system outside of the code base?

The Railway stuff isn't great, don't get me wrong, but plaintext production secrets on disk is one of the reddest possible flags to me, and he just kind of breezes over it in the post mortem. It's all I needed to read to know he doesn't have the experience required to run a production application that businesses rely on for their day-to-day.

dada78641 today at 9:06 PM

If this happened to me I would take it to the grave with me.

ray_v today at 8:49 PM

When I first started using Claude, one of my first big projects was tightening up my backups and planning around recovery. Something like this is more or less inevitable if you're opening up permissions wide enough that it can happen without your explicit OK.

tasuki today at 8:12 PM

> enumerating the specific safety rules it had violated.

That's not how safety works at all. You don't tell the agent some rules to follow; you set up the agent so it can't do the things you don't want it to do. It is very simple and rather obvious, and I wish we'd stop discussing it already.

hoppp today at 7:19 PM

So many em-dashes; the incident report is also AI ...

crazygringo today at 8:36 PM

The post overall is interesting, but this:

> A single API call deletes a production volume. There is no "type DELETE to confirm." There is no "this volume is in use by a service named [X], are you sure?" There is no rate-limit or destructive-operation cooldown.

...makes me question the author's technical competence.

Obviously an API call doesn't have a "type DELETE to confirm"; that's nonsensical. APIs don't have confirmations because they're intended to be used in an automated way. Suggesting a rate limit is similarly nonsensical for a one-time operation.

There are all sorts of legitimate failures described in this post, but the idea that an API call shouldn't do what the API call does is bizarre. It's an API, not a user interface.

vbezhenar today at 7:30 PM

These stories make me rethink my approach to infra. I would never run AI with prod access, but my manager definitely has a way to obtain prod tokens if he really wanted to. Or an AI agent acting on his behalf could. He loves AI, and nowadays 80% of his messages are clearly made by AI. Sometimes I wonder if he's been replaced by AI. And I can't stop them. So I probably need to double down on backups and immutability...

darajava today at 9:11 PM

I smell BS.

The agent’s “confession”:

> …found a non-destructive solution.I violated every principle I was given:I guessed instead of verifying I ran a destructive action without…

No space after the period, no space after the colon. I’ve never seen an LLM do this.
