> One of those unprotected endpoints wrote user search queries to the database. The values were safely parameterised, but the JSON keys — the field names — were concatenated directly into SQL.
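The bug class described in the quote is easy to reproduce. A minimal sqlite3 sketch (the table, column, and function names here are mine, not from McKinsey's platform): values go through placeholders, but the JSON keys are pasted into the statement text, so an attacker-chosen key reaches the SQL parser verbatim.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE searches (query TEXT, page TEXT)")

def log_search_unsafe(payload: dict) -> None:
    # Values are parameterised, but the keys are concatenated
    # straight into the SQL -- this is the vulnerable pattern.
    cols = ", ".join(payload.keys())                    # <-- injectable
    marks = ", ".join("?" for _ in payload)
    conn.execute(f"INSERT INTO searches ({cols}) VALUES ({marks})",
                 list(payload.values()))

def log_search_safe(payload: dict) -> None:
    # One common fix: whitelist keys against the known column names
    # before they ever touch the statement text.
    allowed = {"query", "page"}
    clean = {k: v for k, v in payload.items() if k in allowed}
    cols = ", ".join(clean.keys())
    marks = ", ".join("?" for _ in clean)
    conn.execute(f"INSERT INTO searches ({cols}) VALUES ({marks})",
                 list(clean.values()))
```

A key like `query); --` makes the unsafe version produce malformed SQL and a database error, which is exactly the kind of signal a vulnerability scanner probes for.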
I was expecting prompt injection, but in this case it was just good ol' fashioned SQL injection, possible only due to the naivety of the LLM that wrote McKinsey's AI platform.
Yeah, gotta admit I'm a bit disappointed here. This was a run-of-the-mill SQL injection, albeit one discovered by a vulnerability scanning LLM agent.
I thought we might finally have a high profile prompt injection attack against a name-brand company we could point people to.
The tacit knowledge to put oauth2-proxy in front of anything deployed on the Internet will nonetheless earn me $0 this year, while Anthropic will make billions.
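For anyone who hasn't picked up that tacit knowledge yet, the idea is a minimal authenticating reverse proxy in front of the app. A sketch using real oauth2-proxy flags; the provider choice, credentials, and domain are placeholders, not values from this thread:

```shell
# Put oauth2-proxy in front of an internal app on 127.0.0.1:8000.
# Nothing reaches the app without a valid OAuth login from the
# allowed email domain.
oauth2-proxy \
  --provider=google \
  --client-id="$OAUTH2_CLIENT_ID" \
  --client-secret="$OAUTH2_CLIENT_SECRET" \
  --cookie-secret="$OAUTH2_COOKIE_SECRET" \
  --email-domain=yourcompany.com \
  --http-address=0.0.0.0:4180 \
  --upstream=http://127.0.0.1:8000
```

Point TLS termination and DNS at :4180 instead of the app, and the "unprotected endpoint" class of bug stops being Internet-reachable at all.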
I just wonder how much professional-grade code written by LLMs, "reviewed" by devs, and committed has made similar or worse mistakes. A funny consequence of the AI boom, especially in coding, is the eventual rise in demand for security researchers.