I don't think "obscurity" really buys you much (especially these days, with LLMs).
However "Not Having Stuff to Steal" works like a charm. It's thousands of years old, and has never gone out of style.
I know that it's considered blasphemy, hereabouts, but I've found that not collecting information that I don't absolutely need is pretty effective.
Even if someone knocks down all my gates and fences, they'll find the fox wasn't worth the chase.
It does make stuff like compiling metrics more of a pain, but that's my problem, not my users'.
That's fine if the goal of breaking in is immediate theft; but the goal might also be more along the lines of leaving something behind.
Totally agreed, to me data is just like code: extremely valuable for the functionality it provides, but in most other ways a serious liability. That said:
> I don't think "obscurity" really buys you much (especially these days, with LLMs).
Actually I think it buys you even more with LLMs. As has been posited before (particularly on the threads about open source projects going closed source), security comes down to who has paid more attention to the code, the attacker or the defender. And of course, these days attention is measured in tokens.
We know that LLMs are pretty capable of reverse-engineering an application's logic, but I would bet it takes many more tokens than reading the code or other public information directly. As such, obscurity adds an important layer to security: it increases the cost on the attacker.
Security has always been a numbers game, but now the numbers will overwhelmingly be tokens and scale. If defenders can cheaply raise the costs on attackers by adding simple layers of obscurity, that can act as a significant deterrent at scale. I wonder if we'll even see new obfuscation techniques that are cheap to implement but targeted specifically at LLMs...
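To make the "cheap layer of obscurity" idea concrete, here's a toy sketch (my own illustration, not anything from the thread): XOR-encoding string literals so sensitive identifiers don't appear in plaintext. It's trivially reversible by a determined human, which is exactly the point being made above: the goal isn't secrecy, just forcing the reader to burn more effort (or tokens) per fact recovered.

```python
# Toy obscurity layer: XOR-encode string literals so they don't appear
# verbatim in source or in a compiled artifact. This is NOT cryptography;
# it just raises the per-string cost of reading the code.

KEY = 0x5A  # arbitrary single-byte key, purely illustrative

def encode(s: str) -> bytes:
    """Encode a string into XOR-masked bytes (done ahead of time)."""
    return bytes(b ^ KEY for b in s.encode("utf-8"))

def decode(b: bytes) -> str:
    """Recover the original string at runtime."""
    return bytes(x ^ KEY for x in b).decode("utf-8")

# In practice the encoded bytes would be precomputed and pasted in,
# so the plaintext literal never appears in the shipped source.
obfuscated = encode("admin_endpoint")

assert "admin_endpoint" not in repr(obfuscated) or True  # round-trip check below
assert decode(obfuscated) == "admin_endpoint"
```

A single XOR key is the weakest possible version; the observation in the comment is that even layers this cheap shift the economics when an attacker is paying per token to un-puzzle them at scale.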