Let 10X = 9.9, so X = 0.99. Running the same argument: 10X - X = 9.9 - 0.9 = 9 = 9X, hence X = 1. But X is actually 0.99 in this case, not 0.9 -- you'd need 0.99 = 0.9 for the subtraction to work with the exact same structure as your version.
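You can see the finite case fail with exact arithmetic (a quick sketch using Python's `fractions`, done honestly rather than with the flawed subtraction):

```python
from fractions import Fraction

# The finite analogue of the argument, with exact rational arithmetic.
X = Fraction(99, 100)     # X = 0.99, so 10X = 9.9
diff = 10 * X - X         # the subtraction the proof performs

print(diff)               # 891/100, i.e. 8.91 -- not 9
print(diff == 9 * X)      # True: 10X - X is always 9X...
print(9 * X == 9)         # False: ...but 9X = 9 fails, so X != 1
```

The honest subtraction is 9.9 - 0.99 = 8.91, not 9; the "clean" subtraction only happens if the fractional parts of 10X and X coincide.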
Your proof only works because appending a 9 to an infinite expansion of 9s does not actually add a 9. But at this point you're forced to establish meaning for an infinite expansion of 9s, at which point this is really not just algebra anymore.
This step is always wrong for finitely many repeated digits -- it holds only in the case of infinitely many -- and the proof does not explain why.
More rigorously, let 9.999{n} denote an expansion with n 9s after the decimal point, where n can also be infinity. The subtlety is that the argument needs the fractional part of 10X to be exactly X (so that the subtraction leaves just 9). This is never true for any finite n, and the proof does not establish that it's true for an infinite n -- indeed, it cannot, without first supplying a meaning for the infinite expansion.
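In the {n} notation, the gap can be checked directly: writing X(n) for 0.999{n} (a hypothetical helper name), 10·X(n) - X(n) misses 9 by exactly 9·10^-n for every finite n. A short sketch:

```python
from fractions import Fraction

def X(n):
    """0.999{n}: n nines after the decimal point, as an exact fraction."""
    return 1 - Fraction(1, 10**n)

for n in (1, 2, 5):
    diff = 10 * X(n) - X(n)        # what the proof claims equals exactly 9
    gap = 9 - diff                 # equals 9 * 10**-n: never zero for finite n
    print(n, diff, gap, diff == 9)
```

Multiplying by 10 shifts the digits, so 10·X(n) has only n-1 nines after the decimal point; the mismatch in the last digit is precisely the 9·10^-n gap, and the proof silently assumes that gap vanishes when n is infinite.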
Another way of phrasing it is that it assumes that if X = 0.999..., then 10X = 9.999..., where there are the "same number" of 9s after the decimal point in 10X as there are in X. This seems intuitive for an infinite repeating sequence of 9s, because "one less than infinity" is still infinity, but it's not very rigorous, and the argument as written certainly doesn't explain this.
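For contrast, one standard way to make the argument rigorous (a sketch, using the usual limit definition of a decimal expansion) is:

```latex
X \;=\; \lim_{n\to\infty} X_n, \qquad
X_n \;=\; \sum_{k=1}^{n} \frac{9}{10^k} \;=\; 1 - 10^{-n}.
```

For every finite n we have exactly 10·X_n = 9 + X_{n-1}, and since X_{n-1} and X_n have the same limit X, the limit laws give 10X = 9 + X, hence X = 1. Note that all the work is in the limit step -- which is exactly the meaning the bare algebraic version never supplies.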