> We TOLD you this dynamic web stuff was a mistake. Static HTML never had injection attacks.
Your comparison is useful but wrong. I was online in 99 and the 00s when SQL injection was common, and we were telling people to stop using string interpolation for SQL! Parameterized SQL was right there!
We have all of the tools to prevent these agentic security vulnerabilities, but just like with SQL injection too many people just don't care. There's a race on, and security always loses when there's a race.
The greatest irony is that this time the race was started by the one organization expressly founded with security/alignment/openness in mind, OpenAI, who immediately gave up their mission in favor of power and money.
> We have all of the tools to prevent these agentic security vulnerabilities
We absolutely do not have that. The main issue is that we are using the same channel for both data and control. Until we can separate those with a hard boundary, we do not have tools to solve this. We can find mitigations (that camel library/paper, various back and forth between models, train guardrail models, etc) but it will never be "solved".
> Parameterized SQL was right there!
That difference just makes the current situation even dumber, in terms of people building in castles on quicksand and hoping they can magically fix the architectural problems later.
> We have all the tools to prevent these agentic security vulnerabilities
We really don't, not in the same way that parameterized queries prevented SQL injection. There is LLM equivalent for that today, and nobody's figured out how to have it.
Instead, the secure alternative is "don't even use an LLM for this part".
> We have all of the tools to prevent these agentic security vulnerabilities
I don't think we do? Not generally, not at scale. The best we can do is capabilities/permissions but that relies on the end-user getting it perfectly right, which we already know is a fools errand in security...
You are describing the HN that I want it to be. Current comments here demonstrates my version sadly.
And, Solving this vulnerabilities requires human intervention at this point, along with great tooling. Even if the second part exists, first part will continue to be a problem. Either you need to prevent external input, or need to manually approve outside connection. This is not something that I expect people that Claude Cowork targets to do without any errors.
> We have all of the tools to prevent these agentic security vulnerabilities,
We do? What is the tool to prevent prompt injection?
> We have all of the tools to prevent these agentic security vulnerabilities
How?
> We have all of the tools to prevent these agentic security vulnerabilities,
Do we really? My understanding is you can "parameterize" your agentic tools but ultimately it's all in the prompt as a giant blob and there is nothing guaranteeing the LLM won't interpret that as part of the instructions or whatever.
The problem isn't the agents, its the underlying technology. But I've no clue if anyone is working on that problem, it seems fundamentally difficult given what it does.