Security through obscurity merely means that your system is atypical. It's not hidden, not secret, not hard to find, not hard to examine, not less visible - there is nothing inherently different about the systems other than that one is more common than the other. It's just less typical.
Obscurity is not the same thing as something being "obscured".
Obscurity means something is either difficult to comprehend, not well known, or uncommon.
Obscured means something is hidden or concealed. When something is hidden, the thing is still there and there is still a way to get to it. You can build automated tools around finding it.
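As a concrete illustration of that last point, here's a minimal sketch of such a tool: a service "hidden" on a nonstandard port is still trivially discoverable, because sshd identifies itself with a banner as soon as a connection is made. The host here is hypothetical (a TEST-NET documentation address), and a real scanner would parallelize, but the principle is the same.

```python
import socket

HOST = "192.0.2.10"  # hypothetical target (TEST-NET-1 documentation range)

# Sweep every port; an sshd moved off port 22 still announces itself
# with an "SSH-" banner as soon as a TCP connection is established.
for port in range(1, 65536):
    try:
        with socket.create_connection((HOST, port), timeout=0.5) as conn:
            banner = conn.recv(64)
            if banner.startswith(b"SSH-"):
                print(f"sshd found on port {port}: {banner!r}")
    except OSError:
        continue  # closed, filtered, or silent port; keep scanning
```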
>Being 'less typical' is a form of security because most attacks rely on some form of pattern recognition, and obscurity literally dissolves patterns into noise.
This makes the leap-of-faith assumption that "obscurity" is equivalent to "impossible to understand". In security you have no control over the attacker, and therefore have to assume your attacker has more than enough knowledge and intelligence to perform the attack.
Since computer systems are static and unchanging without frequent patching, you can't assume there is a cat-and-mouse game where the mouse is dynamically adapting its hiding strategies and managing to escape every single time.
As is always the case in these semantic discussions, the answer depends on your initial axioms and assumptions, which does kind of make most of these discussions pointless (but I did learn a lot from this one).
This notion was termed "security through obscurity", ie: "you use the less popular option, therefore that option is safer". It has nothing to do with "obscuring" in the sense of "hiding"; that's a linguistic quirk of a colloquial term. If you were actually taking action to reduce an attacker's ability to understand a system, in a way you could meaningfully defend, it would no longer be "security through obscurity".
The argument has persisted because there are two different questions that sound the same (X is less typical than Y):
1. Is "X" safer than "Y"?
2. Is a user of "X" safer than a user of "Y"?
When looking at (1) in isolation, you can say things like "X lacks security features, therefore Y is safer" or "X is less often used, therefore X is safer", etc. This is a question about the posture of the project itself, in isolation.
(2) is about the context for users. The reality is that X, which is perhaps fundamentally less well-built software, may actually have users who are attacked far less frequently.
Both questions are likely to favor "rarity is a poor indicator of safety", since we generally reject mitigation approaches that rely on attackers behaving in specific ways. But what's important is that these are completely different questions, and neither has to do with being obscured - only with being rare.
None of this is about what is "obscured" or not. If something is obscured or obfuscated, that is a technique that can be evaluated separately on its own merits (ie: how hard is deobfuscation, how easily does the defender adapt once it's broken, etc). All of this is about whether you're evaluating (1) or (2) - and in the case of (1), which is what the criticism has always focused on, the answer is that "rarity" is not a mitigation.
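To make "evaluated on its own merits" concrete, here is a minimal sketch (the XOR scheme and plaintext are hypothetical, chosen for brevity): a single-byte XOR "concealment" falls to a 256-key brute force in microseconds, so on its own merits it adds essentially no attacker cost - and that conclusion comes from measuring the technique itself, not from how rarely it's used.

```python
def xor_obfuscate(data: bytes, key: int) -> bytes:
    """Toy obfuscation: XOR every byte with a single-byte key."""
    return bytes(b ^ key for b in data)

# Hypothetical "hidden" secret, obfuscated with an unknown key.
blob = xor_obfuscate(b"db_password=hunter2", key=0x5A)

# Evaluating the technique on its own merits: the entire keyspace is
# 256 values, so deobfuscation is an exhaustive search for output that
# decodes to printable ASCII.
for key in range(256):
    candidate = xor_obfuscate(blob, key)
    if candidate.isascii() and candidate.decode().isprintable():
        print(f"key=0x{key:02X}: {candidate.decode()}")
```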
That is not where the term comes from.