I see your point, but I don't agree about why XHTML failed. For starters, see: https://en.wikipedia.org/wiki/WHATWG (Basically, XHTML failed because it was a pointless boondoggle, whereas HTML5 very much wasn't.)
Regarding binary integers: having written code for a few common binary protocols and file formats, I've never had to think very hard about it (just: how long? which endian? signed?), but maybe it's different for older or more esoteric stuff.
Re integers, it’s not the esoteric stuff, it’s the flexible, supposedly universal stuff: there’s like half a dozen varieties of varints across MessagePack, CBOR, Protobufs, ASN.1 *ER, etc.; even UTF-8 is just a (limited-range) varint encoding from a certain point of view. “Zigzag encoding” (interleaving positive and negative values so that the least significant bit ends up carrying the sign) is particularly insidious. And note that the (integer) exponent in IEEE floating-point formats is signed but not two’s complement: it uses a biased representation instead.
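For concreteness, here's a minimal Python sketch of the two representations I mean, assuming the Protobuf flavor of zigzag (32-bit) and a standard IEEE 754 double with its bias of 1023:

```python
import struct

def zigzag_encode(n: int) -> int:
    """Map a signed 32-bit int to an unsigned one, Protobuf-style:
    0, -1, 1, -2, 2, ... become 0, 1, 2, 3, 4, ...
    (Python's >> is an arithmetic shift, matching the C idiom.)"""
    return ((n << 1) ^ (n >> 31)) & 0xFFFFFFFF

def zigzag_decode(z: int) -> int:
    """Inverse mapping: the low bit says whether to negate."""
    return (z >> 1) ^ -(z & 1)

def float64_exponent(x: float) -> int:
    """Extract the unbiased exponent of an IEEE 754 double:
    the stored 11-bit field minus the bias of 1023 (not two's complement)."""
    bits, = struct.unpack('>Q', struct.pack('>d', x))
    return ((bits >> 52) & 0x7FF) - 1023
```

So e.g. `zigzag_encode(-1)` is `1` and `float64_exponent(0.5)` is `-1` (stored field 1022, minus the bias). The zigzag trick exists so that small-magnitude negative numbers stay small as unsigned varints instead of blowing up to the maximum width.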
Er, no, that’s not what I was referring to. The XHTML 2 story was stupid, yes (though I think the RDF / “Linked Data” tooling could’ve been really nice had it not been a fantasy), but lots and lots of people were willing to give XHTML 1.1 a chance during the XML craze and the original web-standards push. The trouble was that the HTML 4.01 Strict rules XHTML 1.1 enforced were complicated enough, and XML's error handling draconian enough, that every fumble in a server-side script meant showing the user literally nothing, and nobody ended up willing to tolerate that. (Part of the problem was that people were routinely generating markup from textual templates.)