The principal security problem of LLMs is that there is no architectural boundary between data and control paths.
But this combination of data and control into a single, flexible data stream is also the defining strength of a LLM, so it can’t be taken away without also taking away the benefits.
The "S" in "LLM" is for "Security".
As the article says: this doesn’t necessarily appear to be a problem in the LLM, it’s a problem in Claude code. Claude code seems to leave it up to the LLM to determine what messages came from who, but it doesn’t have to do that.
There is a deterministic architectural boundary between data and control in Claude code, even if there isn’t in Claude.
I don't see why the transformer architecture can't be designed and trained with separate inputs for control data and content data.
It’s easier not to have that separation, just like it was easier not to separate them before LLMs. This is architectural stuff that just hasn’t been figured out yet.
"The principal security problem of von Neumann architecture is that there is no architectural boundary between data and control paths"
We've chosen to travel that road a long time ago, because the price of admission seemed worth it.
This was a problem with early telephone lines which was easy to exploit (see Woz & Jobs Blue Box). It got solved by separating the voice and control pane via SS7. Maybe LLMs need this separation as well