I've thought of that, but the problem is that you need to linearly interpolate between the more accurate table values, and depending on how fine-grained the interpolation is, you would need a fairly large fixed-point multiplier to do that interpolation accurately.
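For concreteness, here's a rough sketch of that kind of LUT-plus-interpolation scheme. All of the sizes here (table index bits, fixed-point precision) are made up for illustration and aren't the paper's parameters:

```python
# Toy fixed-point evaluation of 2^x for x in [0, 1): a small LUT of
# accurate values plus linear interpolation between adjacent entries.
# All parameters are illustrative, not from the paper.
F = 8           # fractional bits of the log-domain input
HI = 4          # high bits index the LUT; the remaining bits are interpolated
LO = F - HI
P = 12          # fixed-point fractional bits for LUT entries

# LUT holds correctly rounded 2^(i / 2^HI), plus one guard entry at 2.0
lut = [round(2 ** (i / 2**HI) * 2**P) for i in range(2**HI + 1)]

def exp2_fixed(f):
    i, r = f >> LO, f & ((1 << LO) - 1)
    # (lut[i+1] - lut[i]) * r is the fixed-point multiply whose width
    # grows with how accurate you need the interpolation to be
    return lut[i] + (((lut[i + 1] - lut[i]) * r) >> LO)

worst = max(abs(exp2_fixed(f) - 2 ** (f / 2**F) * 2**P) for f in range(2**F))
print(worst)  # a few units in the last place at these sizes
```

The point is that multiply: even at these toy sizes it's a 13-bit-by-4-bit product, and it widens as you shrink the interpolation error.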
If you didn't want to interpolate with an accurate slope, and instead used a linear interpolation with a slope of 1 (using the approximations 2^x ~= 1+x and log_2(x+1) ~= x for x \in [0, 1)), then there's the issue I discuss with the LUTs.
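For reference, those slope-1 approximations (the same ones Mitchell used for log-domain multipliers) are only good to about 8–9% at their worst over the interval; a quick check:

```python
import math

# Worst-case error of the slope-1 approximations 2^x ~= 1 + x and
# log2(1 + x) ~= x over x in [0, 1), sampled on a fine grid.
xs = [i / 4096 for i in range(4096)]
err_exp = max(abs(2**x - (1 + x)) for x in xs)
err_log = max(abs(math.log2(1 + x) - x) for x in xs)
print(err_exp, err_log)  # both peak around 0.086
```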
In the paper I mention that you need at least one more fractional bit in the linear domain than in the log domain (i.e., the `alpha` parameter in the paper is 1 + the log significand's fractional precision) for the values to be unique, such that log(linear(log_value)) == log_value, because the slope differs significantly from 1. If instead you took the remainder bits and used them as a linear extension with a slope of 1 (i.e., just pasted the remainder bits on the end, with `alpha` == the log significand's fractional precision), then log(linear(log_value)) != log_value for many values. Whether this is a real problem is debatable, but it probably has some effect on numerical stability if you don't preserve the identity.
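A tiny numeric check of that round-trip property, at toy sizes I picked for illustration (a 3-bit log fraction, not any of the paper's formats): with `alpha` one bit larger than the log fraction the identity holds for every code point, while the slope-1 paste does not:

```python
import math

def to_linear(f, F, alpha):
    # correctly rounded 2^(f / 2^F), fixed point with alpha fractional bits
    return round(2 ** (f / 2**F) * 2**alpha)

def to_log(x, F, alpha):
    # correctly rounded log2 back to F fractional bits
    return round(math.log2(x / 2**alpha) * 2**F)

F = 3
# alpha = F + 1: log(linear(log_value)) == log_value for every code point
assert all(to_log(to_linear(f, F, F + 1), F, F + 1) == f for f in range(2**F))

def paste(f, F):
    # slope-1 extension: just append the fraction bits (alpha == F)
    return 2**F + f

mismatches = [f for f in range(2**F) if to_log(paste(f, F), F, F) != f]
print(mismatches)  # half of the 8 code points fail the round trip
```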
Based on my tests I'm skeptical about training in 8 bits for general problems, even with exact linear addition; it doesn't work well. If you know what the behavior of the network should be, then you can tweak things enough to make it work (as people do today with simulated quantization during training, or with int8 quantization, for instance), but when someone tries something new today and it doesn't work, they tend to blame their architecture rather than the numerical behavior of IEEE 754 binary32 floating point. Some things in ML today (e.g., Poincaré embeddings) have issues even at 32 bits, in both dynamic range and precision. It would be a lot harder to diagnose the problem in 8 bits, when everything is in question and you don't know what the outcome should be.
This math type can and should be used for many more things than neural network inference or training, though.