A 32-bit float can only represent at most 2^32 distinct values (fewer, in fact, since many bit patterns are NaNs), same as a 32-bit int.
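To make the "maybe less" concrete, here's a quick sketch of the IEEE 754 single-precision arithmetic: every bit pattern with an all-ones exponent and a non-zero mantissa is a NaN, and none of those count as a usable value.

```python
# IEEE 754 binary32: 1 sign bit, 8 exponent bits, 23 mantissa bits.
# NaN = exponent all ones (255) with any non-zero mantissa, either sign.
nan_patterns = 2 * (2**23 - 1)          # 16,777,214 NaN encodings
non_nan_patterns = 2**32 - nan_patterns # bit patterns left for real values

print(nan_patterns)      # 16777214
print(non_nan_patterns)  # 4278190082
```

(+0.0 and -0.0 are also two bit patterns for one value, so the count of distinct values is slightly lower still.)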
What’s happening here is using a short (16-bit) rather than a byte (8-bit) for each channel, although not quite to the same extent. Across three channels, HDR lets you represent 2^6, 2^12, or 2^18 times as many colors as 8-bit SDR (for 10-, 12-, or 14-bit channels respectively).
HDR10 is 10 bits per component, and there are plenty of 16-bit images that aren't HDR, for example: http://i.imgur.com/Wm2kSxd.png -- the main point of HDR is to increase the range of values, i.e. how big and small the values can get, not merely how many distinct values you can represent.
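A small sketch of that range-vs-precision distinction: an 8-bit SDR channel gives you 256 codes, but they're all pinned inside [0, 1] after normalization, whereas a 16-bit half float spends its bits buying range instead of just steps (the half-float maximum comes from the IEEE 754 binary16 format, not from anything HDR-specific).

```python
# 8-bit SDR channel: 256 evenly spaced codes, all confined to [0.0, 1.0].
sdr_codes = [i / 255 for i in range(256)]
print(min(sdr_codes), max(sdr_codes))  # 0.0 1.0

# IEEE 754 binary16 (half float): fewer steps near 1.0 than a 16-bit int,
# but a vastly larger range -- its maximum finite value is 65504, not 1.0.
half_max = (2 - 2**-10) * 2**15
print(half_max)  # 65504.0
```

Same storage cost per channel in both 16-bit cases; the float layout trades uniform precision for the headroom that lets HDR encode values far brighter than "paper white".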