Everything to do with LLM prompts reminds me of people doing regexes to try and sanitise input against SQL injections a few decades ago, just papering over the flaw but without any guarantees.
It's weird seeing people just adding a few more "REALLY REALLY REALLY REALLY DON'T DO THAT" to the prompt and hoping, to me it's just an unacceptable risk, and any system using these needs to treat the entire LLM as untrusted the second you put any user input into the prompt.
I have been saying this for a while, the issue is there's no good way to do LLM structured queries yet.
There was an attempt to make a separate system prompt buffer, but it didn't work out and people want longer general contexts but I suspect we will end up back at something like this soon.
The real issue is expecting an LLM to be deterministic when it's not.
I like the Dark Souls model for user input - messages. https://darksouls.fandom.com/wiki/Messages Premeditated words and sentence structure. With that there is no need for moderation or anti-abuse mechanics. Not saying this is 100% applicable here. But for their use case it's a good solution.
It's less about security in my view, because as you say, you'd want to ensure safety using proper sandboxing and access controls instead.
It hinders the effectiveness of the model. Or at least I'm pretty sure it getting high on its own supply (in this specific unintended way) is not doing it any favors, even ignoring security.
I tried to get GPT to talk like a regular guy yesterday. It was impossible for it to maintain adherence. It kept defaulting back to markdown and bullet points, after the first message. (Funny cause it scores highest on the instruction following benchmarks.)
Might seem trivial but if it can't even do a basic style prompt... how are you supposed to trust it with anything serious?
Before 2023 I thought the way Star Trek portrayed humans fiddling with tech and not understanding any side effects was fiction.
After 2023 I realized that's exactly how it's going to turn out.
I just wish those self proclaimed AI engineers would go the extra mile and reimplement older models like RNNs, LSTMs, GRUs, DNCs and then go on to Transformers (or the Attention is all you need paper). This way they would understand much better what the limitations of the encoding tricks are, and why those side effects keep appearing.
But yeah, here we are, humans vibing with tech they don't understand.
Honestly I try to treat all my projects as sandboxes, give the agents full autonomy for file actions in their folders. Just ask them to commit every chunk of related changes so we can always go back — and sync with remote right after they commit. If you want to be more pedantic, disable force push on the branch and let the LLMs make mistakes.
But what we can’t afford to do is to leave the agents unsupervised. You can never tell when they’ll start acting drunk and do something stupid and unthinkable. Also you absolutely need to do a routine deep audits of random features in your projects, and often you’ll be surprised to discover some awkward (mis)interpretation of instructions despite having a solid test coverage (with all tests passing)!
It somehow feels worse than regexes. At least you can see the flaws before it happens
Modern LLMs do a great job of following instructions, especially when it comes to conflict between instructions from the prompter and attempts to hijack it in retrieval. Claude's models will even call out prompt injection attempts.
Right up until it bumps into the context window and compacts. Then it's up to how well the interface manages carrying important context through compaction.
We used to be engineers, now we are beggars pleading for the computer to work
I'm reminded of Asimov'sThree Laws of Robotics [1]. It's a nice idea but it immediately comes up against Godel's incompleteness theorems [2]. Formal proofs have limits in software but what robots (or, now, LLMs) are doing is so general that I think there's no way to guarantee limits to what the LLM can do. In short, it's a security nightmare (like you say).
[1]: https://en.wikipedia.org/wiki/Three_Laws_of_Robotics
[2]: https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...
The principal security problem of LLMs is that there is no architectural boundary between data and control paths.
But this combination of data and control into a single, flexible data stream is also the defining strength of a LLM, so it can’t be taken away without also taking away the benefits.