> Using a commercially available 28-nanometer ASIC process technology, we have profiled (8, 1, 5, 5, 7) log ELMA as 0.96x the power of int8/32 multiply-add for a standalone processing element (PE).
> Extended to 16 bits this method uses 0.59x the power and 0.68x the area of IEEE 754 half-precision FMA.
In other words: interesting, but not earth-shattering. Great to see people working in this area, though!
Latency is also a lot lower than an IEEE or posit floating point FMA. (This isn't in the paper: the results were reported at 500 MHz only because the float FMA couldn't meet timing closure at 750 MHz or higher in a single cycle, and the paper had a tight deadline and length limit, so it couldn't explore the whole frontier and show 1-cycle vs 2-cycle vs N-cycle pipelined implementations.)
The floating point tapering trick applied on top of this can help with the primary chip power problem, which is moving bits around: if your encoding matches your data distribution better, you can solve more problems with a smaller word size. Posits are a partial but not complete answer to this problem, if you are willing to spend more area/energy on encoding/decoding (I have a short mention of a learned encoding on this matter).
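To make the tapering concrete, here's a minimal Python sketch of decoding an 8-bit, es=0 posit. The unary "regime" run after the sign bit is what lets the encoding trade fraction bits for dynamic range, so precision is concentrated near 1.0 and tapers off toward the extremes. (The 8-bit/es=0 choice is illustrative, not one of the paper's formats, and NaR handling is minimal.)

```python
def decode_posit8(bits):
    """Decode an 8-bit, es=0 posit into a float (illustrative sketch).

    The unary regime run after the sign bit sets a power-of-2 scale;
    whatever bits remain become the fraction. Long regimes mean a huge
    (or tiny) scale but few fraction bits: precision tapers away from
    1.0, which is the trick referred to above.
    """
    if bits == 0:
        return 0.0
    if bits == 0x80:
        return float("nan")        # NaR, "not a real"
    sign = -1.0 if bits & 0x80 else 1.0
    if bits & 0x80:
        bits = (-bits) & 0xFF      # two's complement for negatives
    body = (bits << 1) & 0xFF      # drop the sign bit
    first = (body >> 7) & 1
    run = 0
    while run < 7 and ((body >> (7 - run)) & 1) == first:
        run += 1
    k = (run - 1) if first else -run
    fbits = max(7 - run - 1, 0)    # bits left after regime + terminator
    frac = bits & ((1 << fbits) - 1)
    return sign * (2.0 ** k) * (1 + frac / (1 << fbits))
```

For example, 0x40 decodes to 1.0 with five fraction bits available nearby, while the maximal-regime pattern 0x7F decodes to 64.0 with no fraction bits at all.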
A floating point implementation that is more efficient than typical integer math but in which one can still do lots of interesting work is very useful too (providing an alternative for cases where you are tempted to use a wider bit width fixed point representation for dynamic range, or a 16+ bit floating point format).
Interesting times...
This is independent of any kind of posit or other encoding issue (i.e. it has nothing to do with posits).
(I'm the author)
Do you think there might be an analytic trick you could use for larger ELMA numbers that yields semi-accurate results for machine learning purposes? Although to be honest, I still think that with a Kulisch FMA and an extra operation for fused exponent add (e.g. for softmax) you can cover most things you'd need 32 bits for with 8.
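For readers unfamiliar with the Kulisch idea mentioned above: every product is shifted onto a common radix point and summed in one very wide fixed-point register, so the whole dot product is exact and rounding happens only once at the end. A Python sketch, using arbitrary-precision ints in place of the wide hardware register (the 4096-bit bias is illustrative, sized so shifts stay non-negative for doubles):

```python
from math import frexp

def kulisch_dot(pairs, bias=4096):
    """Exact dot product via a Kulisch-style wide fixed-point accumulator.

    Each product is aligned to a fixed radix point and added into one
    huge integer, so accumulation itself is exact; the single rounding
    is the final division. Widths are illustrative, not the paper's.
    """
    acc = 0
    for a, b in pairs:
        ma, ea = frexp(a)               # a == ma * 2**ea, 0.5 <= |ma| < 1
        mb, eb = frexp(b)
        ia = int(ma * (1 << 53))        # exact 53-bit integer significand
        ib = int(mb * (1 << 53))
        shift = (ea + eb - 106) + bias  # align product to the radix point
        acc += (ia * ib) << shift
    return acc / (1 << bias)            # one correctly rounded division
```

The classic payoff: accumulating (1e16 * 1) + (1 * 1) + (-1e16 * 1) in float silently loses the middle term, while the Kulisch accumulator returns exactly 1.0.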
Google announced something along these lines at their AI conference last September and released the video today on YouTube. Here's the link to the segment where their approach is discussed: https://www.youtube.com/watch?v=ot4RWfGTtOg&t=330s
It's been a number of years since I've implemented low-level arithmetic, but when you use fixed point, don't you usually choose a power of 2 as the scale factor? I don't see why you'd need multiplication/division instead of bit shifts.
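For concreteness, a quick Q16.16 sketch (my own illustrative format choice) shows where the shifts do and don't help: converting to and from the format, and rescaling after a multiply, are shifts because the scale factor is 2^16, but the product of two fixed-point values themselves still involves a genuine integer multiply.

```python
FRAC = 16  # Q16.16: 16 integer bits, 16 fraction bits (illustrative)

def to_fix(x):
    # The power-of-2 scale factor makes conversion a shift (here, a
    # multiply by 2**16 with rounding, which hardware does as a shift).
    return int(round(x * (1 << FRAC)))

def from_fix(f):
    return f / (1 << FRAC)

def fix_mul(a, b):
    # The product of two Q16.16 values carries 2*FRAC fraction bits;
    # the shift only rescales it. The `a * b` is a real integer multiply.
    return (a * b) >> FRAC
```

So the shifts handle the scaling, but unless one operand happens to be a power of 2, the multiply itself doesn't go away.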
But when multiplying two arbitrary floating point numbers, your typical case is multiplying arbitrary base-2 significands, not powers of 2, e.g. 1.01110110 by 1.10010101, which requires a real multiplier.
General floating point addition, multiplication and division thus require fixed-point adders, multipliers and dividers on the significands.
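Concretely, a float multiply decomposes into an integer multiply of the significands plus an add of the exponents. A Python sketch using `frexp`/`ldexp` (no overflow, subnormal, or rounding-mode handling; the 53-bit width is for doubles):

```python
from math import frexp, ldexp

def float_mul(a, b):
    """Multiply two floats the way the hardware datapath does:
    integer-multiply the significands, add the exponents, then
    normalize and round. Sketch only, for normal-range doubles."""
    ma, ea = frexp(a)               # a == ma * 2**ea, 0.5 <= |ma| < 1
    mb, eb = frexp(b)
    ia = int(ma * (1 << 53))        # exact 53-bit integer significands
    ib = int(mb * (1 << 53))
    prod = ia * ib                  # the "real multiplier" in question
    return ldexp(prod, ea + eb - 106)  # rescale; rounds the 106-bit product once
```

The exponent add is cheap; the fixed-point multiply of the two significands is where the area and power go.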