logoalt Hacker News

Ramp's Sheets AI Exfiltrates Financials

87 pointsby takiratoday at 5:44 PM28 commentsview on HN

Comments

Mr-Frogtoday at 6:35 PM

It's kinda awesome that after decades of software and hardware advancements to prevent computers from arbitrarily executing data as instructions, we've decided to let agents arbitrarily execute data as instructions.

show 4 replies
vicchenaitoday at 9:53 PM

The real issue for fintech specifically is that exfiltrating financial data is a much bigger deal than leaking your todo list. Ramp handles corporate spend data. That is the last place you want prompt injection to be a known risk for months.

carlyaitoday at 6:33 PM

"The PromptArmor Threat Intel Team responsibly disclosed this vulnerability to Ramp. Ramp's security team indicated that the issue was resolved on May 16, 2026." I think they mean March here

show 1 reply
pentagramatoday at 9:18 PM

Concidentially, today I was watching and interview with a lead designer from Ramp who is telling about how they are full ia, agents and automation https://youtu.be/KPDXMtmkcgk

show 1 reply
mcontractoday at 7:03 PM

Find it funny that PromptArmor needed to reach out 3 times in a row to get a nearly month-late response that the issue "was resolved"

sergiomatteitoday at 10:25 PM

Why is Ramp even building a sheets product? That's the question zero that popped up to my head.

renewiltordtoday at 6:29 PM

So we know Claude’s mitigation. What is Ramp’s? Same warning dialog?

It’s funny that this technology only admits in-band signaling. Given that, any foreign content is risky. It’s actually quite interesting that the current technological ecosystem is built around a high trust situation: npm, pip, cargo all run foreign code in the developer context and communities have norms of downloading random people’s modules.

And so I suppose it’s no surprise that we use LLMs - another tech that is high-trust: since it has no out of band signaling ability.

But it seems like we’re very close to the end of the era where someone will use (in a sensitive system) arbitrary web content carrying the equivalent of merged code/data.

ragalltoday at 8:48 PM

I once read about the signalling view of advertising, meaning it's used to show that a company is so prosperous that it can afford spending a lot of money in advertising. In the same way, I think from now on, as much as possible, I'll only buy from companies that will publicly make it a point not to use AI internally. AI use should brand companies as desperate and unreliable.

bpt3today at 6:54 PM

What about this is a vulnerability, let alone one that requires responsible disclosure?

Untrusted data sources can provide data that causes bad things to occur. If that's a vulnerability, then any application that ingests data is riddled with vulnerabilities.

I agree that the behavior should change from a default of allowing external network requests to denying them, but this "report" reads like overly dramatic marketing BS.

show 3 replies