I don't think I see a problem.
The set of floating point values is intentionally biased towards smaller numbers. So yes, few people ever need to deal with such large numbers - but there are also far fewer such large numbers in the encoding.
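To put a rough number on that bias, here's a minimal C sketch (my own illustration, not from any spec), assuming the usual IEEE 754 64-bit layout: every normal exponent bin of a double holds the same 2^52 bit patterns, so counting the bins below and above 1.0 shows how the representable values are spread across magnitudes:

```c
#include <stdio.h>

/* Back-of-the-envelope check: each normal exponent value of a 64-bit
 * double covers the same number of bit patterns (2^52 mantissas), so
 * counting exponent bins shows the distribution by magnitude. */
int main(void) {
    int below_one = 0, at_least_one = 0;
    /* Normal biased exponents run from 1 to 2046; the bias is 1023,
     * so biased values below 1023 encode magnitudes smaller than 1.0. */
    for (int e = 1; e <= 2046; e++) {
        if (e < 1023)
            below_one++;
        else
            at_least_one++;
    }
    printf("exponent bins below 1.0: %d, at or above 1.0: %d\n",
           below_one, at_least_one);
    /* Prints 1022 vs 1024: nearly half of all normal doubles have
     * magnitude below 1. */
    return 0;
}
```

And since the same 2^52 values that cover [1, 2) also have to cover each enormous binade near the top of the range, large values are extremely sparse on a linear scale.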
I think the flaw in your reasoning is that you're looking at the total set of all possible floating point values and spotting a chunk that few people will ever use. Don't do that. Look at the 63 bits themselves (everything but the sign), and point out which ones you'd like to remove or compress. Yes, the combination of an all-1s exponent is rare, but none of those bits individually is "rare". The MSB of the exponent, for example, is set for every number from 2 all the way up.
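If you want to see which exponent bits perfectly ordinary values occupy, here's a quick sketch (again my own illustration; it assumes IEEE 754 doubles and uses memcpy type-punning to read the bits):

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <math.h>

/* Pull apart the IEEE 754 bit fields of a double to see which
 * exponent bits ordinary values actually use. */
static void dump(double d) {
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);                  /* reinterpret the bits */
    unsigned long long sign = bits >> 63;
    unsigned long long expo = (bits >> 52) & 0x7FF;  /* 11-bit biased exponent */
    unsigned long long mant = bits & ((1ULL << 52) - 1);
    printf("%10g  sign=%llu  exponent=0x%03llx  mantissa=0x%013llx\n",
           d, sign, expo, mant);
}

int main(void) {
    dump(1.0);      /* exponent 0x3ff: MSB clear, the other ten bits set */
    dump(1.5);      /* same exponent; the fraction lives in the mantissa */
    dump(2.0);      /* exponent 0x400: the MSB alone                     */
    dump(INFINITY); /* exponent 0x7ff: the rare "all ones" combination   */
    return 0;
}
```

Only the all-1s pattern (0x7ff) is reserved for Inf/NaN; every other combination of those eleven bits is doing useful work somewhere on the number line.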
I don't doubt one can come up with a clever scheme that provides a different encoding, one even more heavily biased towards "reasonable" numbers, but it's not clear what the gain would be. You'd have to come up with novel algorithms for all the floating point operations (addition, division, etc.) - and would they be as fast as the current ones?
I've yet to find a real-world problem for which the current encoding is a poor fit. Contrived ones, sure - but real problems? Rare.