My overall point is that neither your function nor mine is actually O(1). Whenever you see the notation O(...), there is an implicit context: "as input size n grows arbitrarily, ...". Formally, f(n) = O(g(n)) means there exist constants c and n0 such that f(n) ≤ c·g(n) for all n ≥ n0; you can check the formal definition on Wikipedia.
Neither cost function is defined for arbitrary n, because both functions stop working once the input size crosses a threshold. So the O(1) notation is simply not well-defined here.
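To make the situation concrete, here is a minimal sketch (hypothetical functions, not our actual code): both look constant-time, yet both have a hard input limit, so the premise "as n grows arbitrarily" never applies to them.

```python
# Hypothetical illustration: two "constant-time" functions whose running
# time does not depend on n -- but only while n stays below a hard limit,
# so their cost function is undefined for arbitrarily large inputs.

TABLE = [i * i for i in range(1024)]  # fixed-size lookup table

def square_lookup(n: int) -> int:
    # "O(1)" via table lookup -- but raises IndexError once n >= 1024.
    return TABLE[n]

def square_fixed_width(n: int) -> int:
    # "O(1)" via one multiplication in simulated 64-bit arithmetic --
    # but refuses inputs whose square would overflow a signed 64-bit int.
    assert n * n < 2**63, "input exceeds the 64-bit threshold"
    return n * n

print(square_lookup(10))       # fine below the threshold
print(square_fixed_width(10))  # fine below the threshold
try:
    square_lookup(5000)        # past the threshold: stops working
except IndexError:
    print("square_lookup stopped working")
```

Arguing over which of the two thresholds is "big enough" is exactly the subjective judgment the formal notation was meant to avoid.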
Now, you could come up with a different formal definition of O(1) for bounded input sizes, which is fine, but I don't think you can find one that makes your function O(1) and mine non-O(1). So it would not be a meaningful definition in this case.
Ultimately, you're using O(1) colloquially. In your words, calling my function O(1) is misleading while it is fine for yours because the constant is "small". But "small" is a subjective term, whereas O(1) is a formal one.
If your definition hinges on a subjective characterization, why not just say "it's fast", instead of incorrectly using a technical term?
(If we really want to be pedantic, there is no such thing as "constant-time" memory access either: a TLB miss, for example, makes the CPU traverse a tree of page tables, and a page fault can execute arbitrary code.)