I don't think "obscurity" really buys you much (especially these days, with LLMs).
However "Not Having Stuff to Steal" works like a charm. It's thousands of years old, and has never gone out of style.
I know that it's considered blasphemy, hereabouts, but I've found that not collecting information that I don't absolutely need is pretty effective.
Even if someone knocks down all my gates and fences, they'll find the fox wasn't worth the chase.
It does make stuff like compiling metrics more of a pain, but that's my problem, not my users'.
> Security ONLY through obscurity is bad (Kerckhoffs's Principle).
This is the crux of the article.
(1) Kerckhoffs's Principle doesn’t say that. It says to design the system AS IF the adversary has all of the info about it except the secrets (encryption key, certificates, etc).
(2) This rule is okay if you are the solo maintainer of a WordPress installation. It’s a problem if you work at a large company where part of the company knows the full picture while the rest doesn’t know about the other layers of security BECAUSE of the obscurity layer. So it’s important to communicate that this is only one layer and shouldn’t replace any other security decisions.
Obscurity is not security.
But it can add a bit of delay to someone breaking actual security, so maybe they'll hit the next target first as that is a touch easier. Though with the increasing automation of hole detection and exploitation, even that might stop being the case if it hasn't already.
The biggest problem with obscurity measures IMO is psychological: people tend to assume that the measures⁰ are far more effective than they actually are, so they might make less effort to verify that the proper security is done properly.
----
[0] like moving SSHd to a non-standard port¹
[1] a solution that can inconvenience your users more than attackers, and historically (in combination with exploiting a couple of bugs) actually made certain local non-root credential scanning attacks possible if you chose a high port
The problem with this argument is that you can justify an infinite amount of crap with it, the security equivalent of cockroach papers, which people inevitably end up treating as real security.
One example I remember is Pidgin storing its passwords in plain text in $HOME. They could have encrypted them with some hardcoded string, and made a lot of people happy that they would no longer grep their $HOME and find their passwords right there. However, that would have had the side effect of people dropping the ball: sharing their config files with others, forgetting to set up proper permissions on their $HOME, etc.
In addition, these layers of obscurity are not overhead-free: they may complicate debugging, they may introduce dangerous dependencies, they may tie you to a vendor, they may reduce computing freedom (e.g. Secure Boot), etc.
Security through obscurity is NOT bad.
Security ONLY through obscurity is bad (Kerckhoffs's Principle).
Security through obscurity, as an additional layer, is good!
I've been saying this ever since that phrase was coined. A layer or two of obscurity keeps a lot of noise out of the logs, reduces alert fatigue, cuts down on storage costs (especially if you're using Splunk as your SIEM), and makes targeted attacks much easier to detect. I will keep it.
“Security through obscurity” has the connotation that it is the obscurity that achieves the security - which is bad.
“Security including obscurity” is fine.
Security through obscurity is bad. Security AND obscurity is fine. There's a very clear distinction here.
In a corporate setting my experience is that it is rarely worth it to add any obscurity on top of security. Your biggest challenge is getting people's time and resources, and you need to use that time to implement security controls. A secondary objective is to build security culture over time and teach people to see patterns where more security is needed, so it is important to select what to teach for maximum impact.
Everybody trying to discuss this gets the framing wrong. Obscurity isn't "bad" or "good". It's not "not" security. Security in the real-world sense is about risk. It fixes an adversary and then applies costs to them. Obscurity changes costs (usually by raising them for the adversary).
Depending on the setting and the adversary, obscurity measures can raise costs by a material or immaterial amount.
Obscurity measures usually also impose costs on defenders (and, transitively, on the intended users of the system). Those costs are different than they are for adversaries (usually: substantially lower). They might or might not be material.
Your general goal is to asymmetrically raise costs on the adversary.
Seen that way, it's usually pretty easy to reason about whether obscurity is worth pursuing. Don't do it if it doesn't materially raise costs for attackers, or if, even when it does, the corresponding costs for defenders and users aren't far lower.
What trips people up in forums like this is that we're used to dealing with security problems framed in settings where we can impose \infty costs on attackers: foreclosing all known avenues of attack (to something like a mathematical certainty, and stipulating that computer science discoveries may change the cost function tomorrow). In those settings, all obscurity measures have relatively immaterial attacker costs associated. But it's still the same underlying problem! And, in the real world, we're actually rarely operating in model situations where we really can impose \infty costs on attackers.
Regarding the Counter-Strike example: there were already a lot of cheaters and a cheater ecosystem that still exists to this day. I suspect Valve could address it if it wanted to, but the gameplay/development cost trade-offs don't make it worth it.
Valve pivoted to server-side anti-cheat and toleration because someone probably did the math on max(profit) with lootboxes.
If the obscurity is only an additional layer on top of a secure system, it is called "defense in depth".
It's a simple probability calculation. If some automated scanning tools can't find your service, a lot of attackers will never know of its existence. So even if it has an unpatched vulnerability, they won't attack it.
If 1000 attackers find the vulnerable system, the probability is high that at least one will attack it. If only one or two find it, they might just ignore your system, because they found thousands of others they randomly chose first.
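A back-of-the-envelope sketch of that calculation (the hit rates are made-up assumptions, and scans are treated as independent):

```python
# Probability that at least one of n scanners finds (and so may attack)
# a vulnerable service, assuming independent scans.
def p_at_least_one_attack(n_scanners: int, p_find: float) -> float:
    return 1 - (1 - p_find) ** n_scanners

# On the default port, assume essentially every mass scanner finds you.
print(p_at_least_one_attack(1000, 1.0))  # -> 1.0

# Obscured (non-standard port, no banner): assume only 1 in 10000
# scanners bothers with a full port sweep.
print(round(p_at_least_one_attack(1000, 1 / 10000), 3))  # -> 0.095
```

Even toy numbers like these show the mechanism: the obscurity doesn't fix the vulnerability, it just shrinks the pool of attackers who ever get the chance to exploit it.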
reCAPTCHA is a great success story of security through obscurity: probably fewer than 100 people have reverse engineered it, and far fewer than that have produced a working solver for it that doesn't require a headless browser. Snapchat would be another good example - almost no one is going to put in the work to understand this [0]. Most companies just half-ass it, though, and accordingly achieve nothing with the obscurity at all besides worse performance.
[0] https://web.archive.org/web/20201128060507/https://hot3eed.g...
I get what this post is saying, but I'm going to push back that "security through obscurity" isn't just something that people parrot without understanding.
Obscurity provides, effectively, no security. There may be other benefits to the obscurity, but considering the obscurity a layer of your security is bad. I hope we all agree that moving telnet to another port provides no security (it's easily sniffable, easily fingerprintable).
If it provides another benefit, use it, but don't think there's any security in it.
For ~30 years I've moved my ssh to a non-standard port. It quiets down the logs nicely, people aren't always knocking on the door. But it's not a component of my security: I still disable password auth, disable root login, and only use ssh keys for access. But considering it security is undeniably bad.
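As a sketch, that setup maps to a handful of standard sshd_config directives (the port number here is an arbitrary example):

```
# /etc/ssh/sshd_config (excerpt; port number is made up)
Port 2222                   # obscurity: quiets the logs, not a security layer
PasswordAuthentication no   # security: no password guessing
PermitRootLogin no          # security: no direct root login
PubkeyAuthentication yes    # security: keys only
```

The last three lines carry the actual security; the first one just keeps the noise down.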
One thing I like about some layer of obscurity is not so much that it stops anyone directly attacking you, it's that it stops someone generically attacking you because you happened to use a common thing that someone found a security hole in.
I remember when port knocking was discussed here on HN many years ago it was shit upon because people said security through obscurity is bad. What really frustrated me, at the time and still (when people shit on it), is that it's not just obscurity, it's also security. Port scans see nothing, but just knowing the port doesn't give you anything. You still need a password or key.
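A toy sketch of the idea (an illustrative state machine, not how knockd is actually implemented; the port sequence is made up):

```python
# Toy port-knocking tracker: the secret sequence acts like a key that a
# port scan can't observe, because knocking out of order resets progress.
SECRET_SEQUENCE = [7000, 8000, 9000]  # hypothetical knock sequence

class KnockTracker:
    def __init__(self, sequence):
        self.sequence = sequence
        self.progress = 0

    def knock(self, port: int) -> bool:
        """Record one knock; return True once the full sequence is seen."""
        if port == self.sequence[self.progress]:
            self.progress += 1
        else:
            # Wrong port: start over. A scanner sweeping ports in order
            # keeps resetting itself and never completes the sequence.
            self.progress = 1 if port == self.sequence[0] else 0
        if self.progress == len(self.sequence):
            self.progress = 0
            return True  # now expose the real (still key-protected) service
        return False
```

Even after the sequence is completed, the attacker still faces the real authentication layer; the knocking only hides the door, it isn't the lock.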
The problem with security through obscurity (even if it’s just an “addon”) is that it pollutes your code base, system. It’s just not worth it.
Like moving ssh to a different port. If you are the only one working on it, sure, fine, as long as you remember the port. If you're working with others, then everyone needs to know the new port, so it has to be documented somehow. It’s a PITA.
I am the Modern Man (Secret, secret, I've got a secret)
Who hides behind a mask (Secret, secret, I've got a secret)
So no one else can see (Secret, secret, I've got a secret)
My true identity

The problem with obscurity is that it breeds complacency; the implementer is often uninformed or assumes it protects everything else.
Security which has layers of obscurity can be incredibly powerful especially if you believe in counter intelligence. You want attackers to find the wrong key sometimes because it will lead to you collecting intelligence on them. But this increases the cost in time and infrastructure.
This was largely true before. But AI reduces the cost of comprehension and of finding vulnerabilities en masse to zero, so this no longer holds, and I’m increasingly convinced that hiding in noise and complexity is no longer a valid strategy. But AI symmetrically makes it easier to secure your system, so it’s not like all hope is lost, even if the transition period will be brutal.
I wrote a blog about this: https://tanyaverma.sh/2026/03/01/nowhere-to-hide.html
Kerckhoffs would beg to disagree. Please do not refuse a beggar: https://en.wikipedia.org/wiki/Kerckhoffs%27s_principle
When a firewall rule blocks a port, is that security or obscurity?
Besides fangs and claws, a substantial amount of both predators and non-predators have highly evolved camouflage.
If the Mythos era isn't just hype marketing, "sec-scurity" might not be a valid strategy anymore. If you're banking on being small and irrelevant, you could still be massively fucked over by a breach.
Mom & Pop code shops might be high risk if nation-state level vulnerability-exploitation becomes economically viable to any disgruntled prick.
Couldn't one argue that a password is also obscurity? It's only secure until someone figures it out, just like a secret URL on a website.
Security through obscurity is the same thing as security through praying. A stopped clock is right twice a day.
Obscurity is a form of concealment, but not cover.
Concealment will make specific targeting less than straightforward, but a scorched-earth obliteration will get you along with everything else.
Cover is a condition that is resistant to attack even when you are visible.
You should have both: resistance to sequential action when you are specifically targeted, and an obfuscation of presence, minimizing the frequency of targeting.
I used to joke that we had "Security through Stupidity" - everything was so half-baked that no intruder would think we were being serious.
I want to add that "obscurity" is ambiguous. Is changing the port of SSH "obscurity"? Some may say yes, because you could find it by brute force. But a password with infinite attempts can also be brute-forced. Here, the defining factor of security is the maximum number of attempts (on ports, usernames, or whatever) relative to the size of the search space.
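One way to make that concrete: compare how much of each search space an attacker can cover under the same attempt limit (the 1000-guess cap and the password policy below are made-up assumptions):

```python
# Fraction of a secret's search space coverable under an attempt limit.
MAX_ATTEMPTS = 1000  # assumed online budget before lockout/detection

def coverage(search_space: int, attempts: int = MAX_ATTEMPTS) -> float:
    return min(1.0, attempts / search_space)

ports = 65535        # non-standard SSH port: one scan sweep covers it all
pins = 10 ** 4       # 4-digit PIN
passwords = 62 ** 12 # 12-char alphanumeric password

print(f"ports:       {coverage(ports):.2%}")
print(f"4-digit PIN: {coverage(pins):.2%}")
print(f"password:    {coverage(passwords):.2e}")
```

On these numbers a port hides behind ~16 bits of "secret" while the password hides behind ~71, which is why the same brute-force argument lands so differently for the two.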
I could see AI massively changing the calculus here. Its ability to hack and reverse-engineer (even obfuscated) artifacts may leave obscurity (read: not sharing code or binaries at all) as the primary security mechanism in the industry.
Reducing the attack volume seems like a good idea in any case.
That's why forcing people to use E-mail addresses as user IDs is stupid.
My take: Do proper security, but if you are short on time or resources, you can start with security through obscurity, to block a few percentage of attacks, and then when you have time and resources, go ahead and add the proper security measures.
Security through Obscurity used to work, with AI it absolutely does not.
I've been saying for years, it's one layer of security. That's undeniable.
Yeah, security through obscurity as part of securing a system is good. Security through obscurity as the only way of securing a system is not.
Like, a lot of it comes down to 'high friction' vs 'low friction'. Obscurity means high friction. It means that the attacker needs to craft a specific solution for your site or system in particular rather than relying on an off-the-shelf solution to handle it all for them.
For example, the article's point about changing the WordPress database prefix fits into this category perfectly. Does it really make things that much more 'secure'? No, of course not. But it does mean that automated scripts that just assume tables like wp_posts exist will fail. It means that an attacker can't just run any old WordPress hacking toolkit and watch it do its thing, they have to figure out what database prefix you're using first.
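A quick sketch of that failure mode, using sqlite as a stand-in for WordPress's MySQL (the table and column names follow the article's example; the toolkit's hardcoded query is hypothetical):

```python
import sqlite3

# A site configured with a randomized table prefix...
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE wp_8df7b8_users (user_login TEXT, user_pass TEXT)")

# ...versus an automated toolkit that assumes the default "wp_" prefix.
try:
    db.execute("SELECT user_login FROM wp_users")
    prefix_guess_worked = True
except sqlite3.OperationalError:
    prefix_guess_worked = False  # "no such table: wp_users"

print("default-prefix query worked:", prefix_guess_worked)
```

A human attacker recovers from this in minutes; the point is only that the fire-and-forget scripts don't.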
Same with antispam solutions. The best solution to stop spam is to make your site unique in some way. To add some sort of challenge that a new user has to overcome to use the site, like a question related to the topic, a honeypot field they can't fill in, a script that detects how quickly they register, etc.
This won't stop a determined spammer, but it will stop or delay bots and automated scripts that rely on the target system having the same behaviour across the board. The spammer has to specifically target your site in particular, not just every forum script running the same software.
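A minimal sketch of the honeypot and timing checks described above (the "website" field name and the 3-second threshold are made-up examples):

```python
# Two cheap, site-specific antispam checks from the paragraphs above.
MIN_FILL_SECONDS = 3.0  # humans rarely finish a signup form this fast

def looks_like_bot(form: dict, rendered_at: float, submitted_at: float) -> bool:
    # 1. Honeypot: the "website" field is hidden via CSS, so a human
    #    leaves it empty; naive bots fill in every field they see.
    if form.get("website"):
        return True
    # 2. Timing: submitted faster than a human plausibly could.
    if submitted_at - rendered_at < MIN_FILL_SECONDS:
        return True
    return False
```

Both checks are trivial to bypass once a spammer looks at your site specifically; their value is exactly that generic bots don't.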
And much of society works this way to a degree. A federated or decentralised system (whether a social network or political movement) isn't technically harder to attack than a centralised one might be.
But it is more work to attack it. If a government or company wants to censor Reddit or Discord or YouTube, they have one target they can force to censor information across the board. If they want to target the Fediverse or some sort of torrent based system, then they have to track down dozens of people and deal with at least some of those people refusing or taking it to court or being in countries that aren't under their control or whatever else.
That's kinda what a good security through obscurity setup can be. You can't mass target everyone at once, you have to target different systems individually and spend more time and resources in the process.
However, you still need real security measures there. Security through obscurity is like hiding a safe behind a painting. It'll stop casual attackers from finding it, but it won't stop a targeted attack on its own. You need a strong lock, materials that are difficult to drill through and the safe itself being difficult to remove from the wall too.
Obscurity should always be part of security
Just because you have a bunker, doesn’t mean you hand the enemy the plans.
It's useless for the example given, because obfuscating JavaScript as protection no longer serves any purpose when you can let AI analyze the code and/or, in this case, the API requests.
I recently did use a variation of this type of security to prevent a malicious user from misusing our services... but I made a note to myself and everyone else that it was just a quick fix, not guaranteed to work long term.
I have always replied to colleagues who pooh-poohed "security through obscurity!" as if it were proof of ignorance or bad culture with "a password is just a string of obscure characters. ;-)"
That's not a serious argument, of course. But consider how the spooks operate in the field. They employ all manner of obscure practices in an attempt to improve their security. Their intentional obscurity (AFAIK) is never allowed to unnecessarily complicate operational practices, which would introduce risk. And they've probably got a lot more theory and no-BS field testing behind their practices than we do.
Maybe we should ask them for advice?
Cryptography is "just" a mathematically sophisticated version of manufacturing obscurity, so that's missing the point a bit. Obscurity is just information asymmetry, which is the only way we have to "secure" / anchor anything. That quote is about all the other forms of manufactured obscurity not being anywhere near as rigorous, which should be obvious.
Wordpress is a great example. He cites
> There is a long-standing security recommendation to change WordPress's default database table prefix to a random one. For example, wp_users becomes wp_8df7b8_users. This is often dismissed as "worthless" because it is security through obscurity.
I found that just changing the default URL for the wordpress login from the usual wp-admin to anything reduces by several orders of magnitude the number of scripts that try your site for the most common vulnerabilities---something that happens constantly for any site on the web, once a minute or so.
Mo Beigi unfortunately misses the point.
Yes, echo chambers are annoying - I remember this from when I challenged them to explain to me why being superuser is problematic (hint: I countered their arguments easily, and then they got very angry about it; I did this on several IRC channels back in the day, just to prove a point. I managed to get banned on one too in the process.)
But ... obscurity is NOT a security technique. It just has a catchy slogan.
The primary reason why javascript is sometimes - or often - obfuscated is to make it harder to copy/paste and re-use stuff. That's it. Even with sanitizers, de-obfuscating it tends to increase the amount of time one has to spend to uncripple the code. This is the primary function; anything else is just decoy for the most part here.
> Security through obscurity is the practice of reducing exposure by keeping an application's inner workings or implementation details less visible to attackers
Very clearly, his attempt to explain it is already biased. Is obfuscating JavaScript security through obscurity? If we can't agree on the terms, we can't agree or disagree on anything that follows.
Showing fancy images does not add any real argument to the discussion.
> For example, wp_users becomes wp_8df7b8_users. This is often dismissed as "worthless" because it is security through obscurity.
Note that this example does not even follow his own (!!!) definition.
This has nothing to do with obscurity. It is simply a different name from the default table name. What would he expect people to do? Retain the default? And if they change it, are ALL changes, in his opinion, "security through obscurity"? He picked wp_8df7b8_users here. Would the name "foobar" instead be better? Or is it "not obscure enough"?
Obscurity can be fine but it's not security. I think of it like cover and concealment in the military. Security is cover. Something you can get behind so the bullets don't hit you. Obscurity is concealment. Harder to see, harder to find, so the enemy doesn't know where to shoot, but it's not stopping any bullets. Both have advantages and disadvantages and can complement each other depending on how they're used.