Everybody trying to discuss this gets the framing wrong. Obscurity isn't "bad" or "good", and it isn't "not security". Security in the real-world sense is about risk: you fix an adversary (a threat model) and then reason about the costs you can impose on them. Obscurity changes costs, usually by raising them for the adversary.
Depending on the setting and the adversary, obscurity measures can raise costs by a material or immaterial amount.
Obscurity measures usually also impose costs on defenders (and, transitively, on the intended users of the system). Those costs are different from what they are for adversaries (usually: substantially lower). They might or might not be material.
Your general goal is to asymmetrically raise costs on the adversary.
Seen that way, it's usually pretty easy to reason about whether obscurity is worth pursuing. Don't do it if it doesn't materially raise costs for attackers; and even if it does, don't do it unless the costs it imposes on defenders and users are far lower.
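That decision rule can be sketched as a toy cost model. Everything here is a hypothetical illustration (the function name, the cost units, and the threshold/ratio values are all made up to show the shape of the reasoning, not measured quantities):

```python
def worth_pursuing(attacker_cost_delta, defender_cost_delta,
                   materiality_threshold=10.0, asymmetry_ratio=10.0):
    """Toy decision rule: pursue an obscurity measure only if it
    (a) materially raises attacker costs, and
    (b) costs defenders/users far less than it costs attackers.
    All units and thresholds are illustrative, not measurements."""
    if attacker_cost_delta < materiality_threshold:
        return False  # doesn't move the needle for the adversary
    # require the attacker's cost increase to dwarf the defender's
    return attacker_cost_delta >= asymmetry_ratio * defender_cost_delta

# Cheap for defenders, modestly expensive for attackers: worth it.
print(worth_pursuing(attacker_cost_delta=50.0, defender_cost_delta=1.0))   # True
# Same attacker cost, but nearly as expensive for defenders: not worth it.
print(worth_pursuing(attacker_cost_delta=50.0, defender_cost_delta=40.0))  # False
```

The point of the sketch is only the asymmetry: both deltas matter, and the comparison is between them, not against zero.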
What trips people up in forums like this is that we're used to security problems framed in settings where we can impose effectively infinite costs on attackers: foreclosing all known avenues of attack to something like a mathematical certainty (stipulating that computer science discoveries may change the cost function tomorrow). In those settings, whatever additional cost obscurity imposes on attackers is relatively immaterial. But it's still the same underlying problem. And, in the real world, we're rarely operating in model situations where we really can impose infinite costs on attackers.