What is that massive semantic difference? If you want the number represented by 1e999 as the value for salary, at some point, something has to take "1e999", whether you call it a string or a something-with-no-type, and turn it into a number. Your deserializer has to know to do that in either case.
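To make that concrete: many JSON libraries already expose a hook for exactly this. A minimal sketch using Python's standard-library `json` module (the `"salary"` key is just an illustrative field name from this discussion, not anything the libraries mandate):

```python
import json
from decimal import Decimal

doc = '{"salary": 1e999}'

# Default behavior: the reader builds a machine float, and 1e999
# is outside float range, so the value silently becomes infinity.
print(json.loads(doc)["salary"])  # inf

# Told to use an arbitrary-precision type instead, the same reader
# preserves the value exactly. This is the "knowledge" in question.
exact = json.loads(doc, parse_float=Decimal)
print(exact["salary"])  # 1E+999
```

Either way, something had to decide that this particular lexeme should become a `Decimal` rather than a `float`; the hook is just where that decision gets plugged in.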
How does the [deserializer] step in the XML example know to call into [bignum], and why can't the [json reader] in the JSON example have that knowledge in the same fashion?
Because the XML document has a semantic meaning that is specifically designed for this application. It may even have a schema definition document that formally specifies which types to expect. JSON, by contrast, has its types imposed on it by its origin as JavaScript literal syntax.