I'm aware - I'm merely responding to the previous commenter's point that the compression algorithm is "starting off with a bit mapped image that your brain happens to interpret as the number 17", and pointing out that if that were the case, the likely outcome would be a fuzzier-looking "17", not a crisp "21".
Clearly, the compression algorithm is designed around human perception (i.e., it looks for visually similar segments and, I assume, tokenizes them), and therefore does relate to the actual semantics of the document, albeit in a coarse and mechanical way. It knew enough to replace character glyphs with other character glyphs, but not enough to choose the right ones.
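To make that concrete, here's a minimal sketch of the kind of tokenize-and-substitute scheme I mean (roughly what JBIG2-style symbol matching does, as I understand it). Everything here is illustrative - the pixel-XOR distance, the `compress_glyphs` name, and the 0.15 threshold are my assumptions, not the actual algorithm:

```python
import numpy as np

def glyph_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Fraction of pixels that differ between two same-shaped binary glyphs."""
    return np.count_nonzero(a != b) / a.size

def compress_glyphs(glyphs: list[np.ndarray], threshold: float = 0.15):
    """Replace each glyph with a reference to the first 'similar enough'
    dictionary symbol; otherwise add it to the dictionary as a new symbol."""
    dictionary: list[np.ndarray] = []
    refs: list[int] = []
    for g in glyphs:
        match = next(
            (i for i, d in enumerate(dictionary)
             if d.shape == g.shape and glyph_distance(d, g) < threshold),
            None,
        )
        if match is None:          # no near-duplicate found: new symbol
            dictionary.append(g)
            match = len(dictionary) - 1
        refs.append(match)         # page stores only the symbol index
    return dictionary, refs

# Toy demo: two glyphs differing in ~10% of pixels collapse to one symbol.
rng = np.random.default_rng(0)
glyph = rng.integers(0, 2, size=(16, 12))
noisy = glyph.copy()
flips = rng.random(glyph.shape) < 0.10   # simulate scan noise
noisy[flips] ^= 1
dictionary, refs = compress_glyphs([glyph, noisy])
print(len(dictionary), refs)             # 1 [0, 0] - both map to symbol 0
```

The decompressed page is rendered from the dictionary, so a mismatched glyph comes out perfectly crisp - which is exactly why the failure looks like a confident "21" rather than a degraded "17": a noisy "1" that falls within the match radius of a "2" already in the dictionary gets silently substituted.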
My point is that it's not coincidental at all - this algorithm obviously sits in a sort of "uncanny valley" in its attempt to model human visual perception.