I know two sort-of examples of this. Firstly, there’s MS-DOS long file names. That’s a hack (in the positive sense of the word) that gives code that doesn’t know about long file names the 8.3 file names that it expects.
It works, but code that isn’t aware of long file names will only ever write 8.3 names, so even a simple file copy with an old copy tool silently drops the long names.
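For flavor, here is a rough Python sketch of how a long name gets mangled into an 8.3 one (heavily simplified; the real VFAT algorithm also handles name collisions, OEM code pages, and reserved characters):

```python
def short_name(name: str) -> str:
    """Simplified 8.3 short-name generation sketch."""
    base, _, ext = name.rpartition(".")
    if not base:  # no dot: the whole name is the base
        base, ext = name, ""
    # Drop characters not valid in 8.3 names, uppercase the rest.
    clean = lambda s: "".join(c for c in s.upper() if c.isalnum())
    base, ext = clean(base), clean(ext)[:3]
    if len(base) > 8:
        base = base[:6] + "~1"  # numeric tail; the real FS increments it on collision
    return f"{base}.{ext}" if ext else base

print(short_name("Long File Name.txt"))  # LONGFI~1.TXT
```

Old code only ever sees and writes names of that mangled shape, which is exactly why the long form is lost on a round trip through an old tool.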
Secondly, there’s macOS. It has a minor thing with directory separators: the Unix layer treats ‘/’ as the directory separator, while old-style Mac code uses ‘:’. That works relatively well, mostly because old-style Mac code doesn’t expose file paths in the UI.
And of course, every desktop OS has to deal with this when it mounts drives that handle file names differently. HFS+ used NFD normalization, while most other file systems that normalize names use NFC; disks may be case-preserving or not, case-insensitive or not, normalization-insensitive or not, etc.
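A quick sketch in Python of why NFC and NFD names can collide even though they look identical:

```python
import unicodedata

# ‘é’ can be stored as one precomposed code point (NFC) or as
# ‘e’ plus a combining acute accent (NFD). HFS+ stored names on
# disk in the NFD form; most other file systems keep whatever
# byte sequence they were given, typically NFC.
nfc = unicodedata.normalize("NFC", "café")  # 4 code points
nfd = unicodedata.normalize("NFD", "café")  # 5 code points

print(len(nfc), len(nfd))  # 4 5
print(nfc == nfd)          # False: equal to the eye, unequal as strings
print(unicodedata.normalize("NFD", nfc) == nfd)  # True once normalized
```

So a program that compares paths with plain string equality can fail to match a file it just created, depending on which file system it landed on.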
On Unix-like OSes a path /a/b/c/d/e/f can walk six different file systems, each with different rules (even without soft or hard links). It wouldn’t surprise me to find bugs in programs there.