https://en.wikipedia.org/wiki/Computational_complexity_of_ma...
Also, did you even read the article you linked? The lowest complexity for multiplication is an open question; it's absurd to say that it "has the same big-O complexity".
In fact, one can also show that M(n) is O(D(n)) (where D(n) is the time needed to divide n bit numbers). So they really are big-theta of each other.
(See Aho, Hopcroft, and Ullman, The Design and Analysis of Computer Algorithms, 1976, section 8.2)
It is very easy to multiply an integer by itself, not so much to compute the square root of an integer.
It is very easy to multiply 3x3x5, not so much to find the prime factors of 45.
There are some benchmarks here: https://ridiculousfish.com/blog/posts/benchmarking-libdivide...
I noticed it is very fast when I was experimenting with algorithms for unbiased bounded random numbers https://dotat.at/@/2022-04-20-really-divisionless.html - the fancy nearly- and really-divisionless algorithms had much less of an advantage on an Apple M1 than on an old Intel CPU.
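For reference, the "nearly divisionless" trick from that post (Lemire's algorithm) fits in a few lines. This is a hedged Python sketch, not the article's exact code (the original is C, using a 64x64-to-128-bit multiply):

```python
import random

def bounded(s):
    """Unbiased random integer in [0, s), using at most one (rare) division.

    Sketch of Lemire's "nearly divisionless" method: multiply a 64-bit
    random word by s and keep the high half, rejecting the few low-half
    values that would bias the result.
    """
    x = random.getrandbits(64)
    m = x * s                      # a 128-bit product in C; exact in Python
    low = m & (2**64 - 1)          # low 64 bits
    if low < s:                    # slow path, probability < s / 2**64
        threshold = 2**64 % s      # the one division, taken rarely
        while low < threshold:
            x = random.getrandbits(64)
            m = x * s
            low = m & (2**64 - 1)
    return m >> 64                 # high 64 bits is the bounded result
```

The fast path is a single multiply plus a compare, which is why it shines on chips with fast multipliers like the M1.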
Multiplication is completely symmetric: shift off the lowest bit (digit), multiply (which is just logical AND for binary), add to the result, shift the result right, and repeat.
The reason multiplication is faster than division isn't about complexity, it's about dependency. Each division step depends on the result of the previous one, whereas multiplication's partial products can be computed in parallel.
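A minimal Python sketch of that shift-and-add loop (an 8-bit serial multiplier; the helper name and word width are my own choices):

```python
def serial_mul(a, b, nbits=8):
    """Multiply a by b (with 0 <= b < 2**nbits), one multiplier bit per step:
    peel off the lowest bit of b, add the (ANDed) multiplicand into the
    high half of the accumulator, shift the accumulator right, repeat."""
    acc = 0
    for _ in range(nbits):
        if b & 1:                 # per-bit "multiply" is just a logical AND
            acc += a << nbits     # add the partial product into the high half
        b >>= 1
        acc >>= 1                 # shift the result right
    return acc
```

Each iteration depends only on one bit of b, so hardware can compute all the partial products at once and sum them with an adder tree, which is exactly the parallelism division lacks.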
A/1 divided by 0/1 being stored as 1/0 and then propagated around seems reasonable, but I keep putting off the bookwork of finding out which arithmetic identities still hold at that point. I think one loses folding multiplication by zero to zero: for example, A * 0 is now only 0 when A is known to have a non-zero denominator.
Context is a language with arbitrary precision rationals as the number type, and all I'm really looking for is a way to declare that the type of division is a rational for any rational arguments. Searching for this mostly turns up results on the naturals or integers which I'm not particularly interested in.
Long shot but worth asking. Thanks
Given the usual definition of multiplication (and division as its inverse), this is not possible:
Assume some multiplicative inverse 0⁻¹ of 0 exists. Then, writing · for multiplication, since
a·0 = b·0
for all rational a and b, we have
a·0·0⁻¹ = b·0·0⁻¹
and therefore
a = b
for all rational a and b, which is absurd.
To get an idea of what operations do and don't make sense when extending the rationals in the way you propose, define ∞ as our 0⁻¹ and refer to
https://en.wikipedia.org/wiki/Projectively_extended_real_lin...
substituting ℚ and ℚ̂ for ℝ and ℝ̂.
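To make the trade-offs concrete, here is a minimal Python sketch of arithmetic on ℚ̂ (the names INF, p_mul, p_div are mine), showing which operations must stay undefined:

```python
from fractions import Fraction

INF = "INF"  # the single extra point 1/0 of the projective line

def p_mul(a, b):
    """Multiplication on Q extended with INF; 0 * INF has no consistent value."""
    if a is INF or b is INF:
        other = a if b is INF else b
        if other == 0:
            raise ArithmeticError("0 * INF is undefined")
        return INF                  # includes INF * INF = INF
    return a * b

def p_div(a, b):
    """Division on Q extended with INF; 0/0 and INF/INF stay undefined."""
    if a is INF and b is INF:
        raise ArithmeticError("INF / INF is undefined")
    if b is INF:
        return Fraction(0)          # a / INF = 0 for finite a
    if b == 0:
        if a == 0:
            raise ArithmeticError("0 / 0 is undefined")
        return INF                  # a / 0 = INF for a != 0
    if a is INF:
        return INF                  # INF / b = INF for finite b
    return Fraction(a) / b
```

Note how the `0 * INF` case is exactly the lost "multiply by zero folds to zero" identity mentioned in the question.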
For a similar construction that maintains ordering properties, see
https://en.wikipedia.org/wiki/Extended_real_number_line#Arit...
Finally, note that the complex-number equivalent of your construction, the Riemann sphere, is really interesting and useful.
Outside of proof systems/pure math, like in software you write, you can define n/0 to be anything you like, you just can't depend on being able to derive mathematically consistent results.
There are also more exotic math systems. Infinity is not literally a number (treating it as one leads to contradictions), but mathematicians can happily work with number systems augmented with infinity as an extra member -- one just has to step carefully, since it still isn't a number. The "number system" has one non-number member requiring special handling.
Similarly there are modern approaches to infinitesimals where there's a thing "e" which is not zero, but e^2 is zero. That is not normal arithmetic but when handled carefully it can be useful.
https://lawrencecpaulson.github.io/2021/12/01/Undefined.html
https://www.hillelwayne.com/post/divide-by-zero/
https://xenaproject.wordpress.com/2020/07/05/division-by-zer...
You'll need to do some thinking/proof to find out if it works for you.
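The infinitesimal "e" mentioned above is a dual number. A tiny sketch (my own throwaway class, not a standard library) showing that e is nonzero yet e squared is zero, and why that's useful: the e-coefficient automatically carries a derivative.

```python
class Dual:
    """Dual numbers a + b*e, where e*e = 0."""
    def __init__(self, re, eps=0):
        self.re, self.eps = re, eps
    def __add__(self, o):
        return Dual(self.re + o.re, self.eps + o.eps)
    def __mul__(self, o):
        # (a + b e)(c + d e) = ac + (ad + bc) e, since the e*e term vanishes
        return Dual(self.re * o.re, self.re * o.eps + self.eps * o.re)

e = Dual(0, 1)
ee = e * e                  # e is not zero, but e squared is
f = lambda v: v * v * v     # f(x) = x^3
d = f(Dual(2, 1))           # evaluate at x = 2 + e
# d.re is f(2) = 8, and d.eps is f'(2) = 12: forward-mode autodiff for free
```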
It has somewhat kicked the can down the road to defining the multiplicative inverse, but that's still a good step forward. Thank you
Classic examples include:
- time domain / frequency domain
- lookup table / functional form
- traditional binary representation / gray codes
The above is also not true of factors of the modulus itself, but then that's just a right shift so that part is easy.
eg. Under mod 35 if i want to divide by 3 i can just multiply by 12 since 12x3 = 1 under mod 35.
eg2. Under mod 35 if i want to divide by 11 i can just multiply by 16 since 16x11 = 1 under mod 35.
I can do this for any number that doesn't share factors with 35, since there's always a multiplier for that number that gives 1. I could also construct base-2 examples similarly. It's called the multiplicative inverse.
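In Python (3.8+), the built-in pow with a negative exponent computes exactly this modular inverse, so the examples above can be checked directly:

```python
m = 35
# the inverse exists exactly when gcd(d, m) == 1
assert pow(3, -1, m) == 12     # 12 * 3 = 36, which is 1 mod 35
assert pow(11, -1, m) == 16    # 16 * 11 = 176, which is 1 mod 35

# "dividing" 6 by 3 mod 35 really is multiplying by the inverse:
assert (6 * pow(3, -1, m)) % m == 2

# and it fails, as it must, for a divisor sharing a factor with 35:
try:
    pow(5, -1, m)
except ValueError:
    pass  # 5 has no inverse mod 35
```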
Modular division is straightforward to convert to a multiplication; however, this is mainly for specialized applications like cryptography and number theory. It's uncommon to want a divide-by-2 that can make the value larger.
Ordinary flooring division can also be converted to a multiplication problem when the dividend is bounded. This uses the "magic number" approach where you multiply by a rounded scaled reciprocal, and then do some work to correct the error from rounding.
>It's worth noting that you can generally divide much faster if you know the divisor ahead of time
Division is just multiplication by 1/x, so I don't understand.
It gets tricky because 2^32/3 is not an integer, so you must round it. This introduces error; the idea is to show that the error is wiped out by the flooring divide. I did the monster math here:
https://ridiculousfish.com/blog/posts/labor-of-division-epis...
Here's an example: https://godbolt.org/z/8vG637fb5
It compiles x/6 to some mad multiplication, bitshift and add.
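The compiler's trick can be reproduced by hand. A sketch for unsigned 32-bit x/6, with the constants worked out per the magic-number recipe (so treat them as an illustration, not the exact instruction sequence the compiler emits):

```python
M = 0xAAAAAAAB        # ceil(2**34 / 6): the rounded, scaled reciprocal

def div6(x):
    """Floor-divide a 32-bit unsigned value by 6 via multiply-and-shift.
    M overestimates 1/6 by less than 2**-34, and that error is small
    enough that the >> 34 floors it away for every x < 2**32."""
    assert 0 <= x < 2**32
    return (x * M) >> 34

for x in (0, 1, 5, 6, 7, 12345678, 2**32 - 1):
    assert div6(x) == x // 6
```

In hardware terms that's one multiply plus one shift, which is why the compiler prefers it to an actual divide instruction.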
> If generic floating point division were more important to modern CPU's then it might make sense to dedicate enough silicon area to make it single cycle, however most chip makers have obviously decided that they can make better use of that silicon by using those gates for other things. ..
> CPU manufacturers would only dedicate a large swath of silicon to a low-latency floating point divide unit if they couldn't use that silicon more effectively to speed up the CPU in other ways. Generally speaking though, having more long-latency FDIVs is a more efficient use of silicon than fewer shorter-latency FDIVs
See https://www.ed-thelen.org/comp-hist/CRAY-1-HardRefMan/CRAY-1...
So really, it's tradition that division (which is needed relatively rarely) is not performed quickly in hardware. The old time versus silicon tradeoff strikes again - you can have it quick but it will be expensive, how much do you want it?
Re the original question - (almost) anything can be done in enough silicon. Both multiplication and division could be done as very fast "ROM" lookups, but that would use an enormous amount of silicon.
a+b = a + a(b/a) = a(1 + (b/a))
y = x/5.0
vs. y = 0.2 * x
Is that true? Kind of like recovering the creamer from your coffee, only with symbols.
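It is not exactly true in floating point: 0.2 has no finite binary representation, so multiplying by the rounded reciprocal can differ from dividing by 5.0 in the last bit. A quick check with Python doubles:

```python
x = 3.0
q1 = x / 5.0      # correctly rounded quotient: 0.6
q2 = x * 0.2      # rounded 0.2, then a second rounding: 0.6000000000000001
# The two differ by one ulp, so a compiler can't blindly swap them
# (it will only do so under fast-math style flags).
```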
2. Even if we were discussing exact division, the ratio of two integers is a rational. The rationals are countable. More generally, any image of a countable set is countable, and the cartesian product of two countable sets is countable; there is no way you could get something uncountable from something like this.
3. Aleph_1 is not (necessarily) the same thing as the cardinality of the reals, which is 2^(aleph_0). The statement that these are the same is known as the continuum hypothesis and is undecidable in standard mathematics.
...and, also, none of this really relates to the complexity of performing integer division, even ignoring the fact that these statements are false.