Quick proof of this: as the number of terms n in the sum goes to infinity, the ratio of each term to the previous one approaches 1/4. The first factor contributes m/(m+1) and the second q/(q+2), for some m and q that go to infinity along with n, so both tend to 1; the third factor contributes exactly 1/4.
If we counted in base 4, the value of each digit would on average be 1/4 of the previous one, at least assuming pi behaves like a normal number (widely believed, but unproven). But we count in base 10, so we get log_10 4 decimal digits every time we get one base-four digit. Which is very close to 0.6.
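That 0.6 figure is just log_10(4); a one-line sanity check (plain JS, my own sketch):

```javascript
// Each base-4 digit of pi is worth log10(4) decimal digits of information.
const decimalDigitsPerBase4Digit = Math.log10(4);
console.log(decimalDigitsPerBase4Digit); // ≈ 0.602
```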
But if I enter 100k, it takes 30 seconds to get to reporting 10k digits worth of progress.
Hmm. Have to think about that one. Is it just because it's asking JS to do comparisons on much larger numbers?
Rendering the digits in decimal was a significant delay (around 40 seconds for a million digits). That's why it just shows the hex until the very end, and then reports how long the decimal conversion took.
Does anyone on the Spidermonkey team have some insight?
The open issues this bug depends on are a pretty good list of BigInt-related performance enhancements that haven't been implemented yet.
Edit: There are also some nice formulae for quick convergence in this article: https://julialang.org/blog/2017/03/piday
EDIT: I understand now, the numerator on the first term is ascending odds and the denominator is ascending evens. Thanks for everyone's help!
So for instance, the 3/5/7 is just the (2n+1) value.
The other looks like the sum of the product of (2n-1)/(2n) for values 1 to n.
You are left with this (in the 4th line):
1   3   5   1
-   -   -   -
2   4   6   7
The pattern I see is that, starting from the top left and reading numerator, denominator, numerator, denominator and so on gives: 1 2 3 4 5 6 7 if you ignore the last numerator.
I may have it wrong, but that looks like the pattern.
And I'm sorry if it looks like 13 over 24. It's supposed to look like 1/2 times 3/4. (The next term adds 5/6, and so on.)
Equivalently, you just list the odds in the numerators and the evens in the denominators.
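If I'm reading the pattern right, those fractions are the partial terms of the arcsin(1) series, pi/2 = sum over k >= 0 of (2k-1)!!/((2k)!! * (2k+1)): each term is the running odds-over-evens product times one over the next odd. A quick floating-point check in JS (variable names are mine):

```javascript
// pi/2 = 1 + (1/2)(1/3) + (1/2 * 3/4)(1/5) + (1/2 * 3/4 * 5/6)(1/7) + ...
let sum = 0;
let prod = 1; // running product (2k-1)!!/(2k)!!
for (let k = 0; k < 100000; k++) {
  sum += prod / (2 * k + 1);
  prod *= (2 * k + 1) / (2 * k + 2); // append the next odd/even factor
}
console.log(2 * sum); // slowly approaches pi
```

It converges far too slowly to be useful for digits, which is presumably why the formula elsewhere in the thread carries the extra 1/4^k factor.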
The Chudnovsky brothers’ algorithm yields 14.18... digits per term. An implementation in Scheme is only about two dozen lines of code. It computes a million digits of pi in about 17.5 seconds on a Raspberry Pi 4 in Gambit Scheme (57 seconds on the original Raspberry Pi, IIRC).
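That 14.18 figure is log_10(640320^3 / 1728), if I recall the Chudnovsky constant correctly:

```javascript
// Digits of pi gained per Chudnovsky term; the constant is my recollection
// of the standard result, log10(640320^3 / 1728) = log10(151931373056000).
console.log(Math.log10(640320 ** 3 / 1728)); // ≈ 14.18
```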
The hosting is static pages on S3.
I wonder if there's anything I could do to avoid the domain getting flagged.
Maybe it's because the root page on the domain is just a quote.
Anyway I wouldn’t worry too much. It’s likely just stodgy corporate environments that put those kinds of controls in place.
bc -l <<< "scale=$n; 4*a(1)"
Learned three new things making a dumb HN joke... not bad!
http://numbers.computation.free.fr/Constants/TinyPrograms/ti...
I did the same thing when I was testing it. I would keep an eye on the system RAM resources graph as the script was running. Watching the RAM start to spike was oddly satisfying.
It's pretty scary to me how easy it is to crash a browser these days with something so simple.
let y = 3n * 10n ** 1000020n; // pi * 10^1000000 with 20 guard digits, via pi = 3 + sum 3*(2k-1)!!/((2k)!!*(2k+1)*4^k)
let i = 1n, x = y / 8n, p = y; // x carries 3*(2k-1)!!/((2k)!!*4^k) scaled; the original ratio was off by one term
while (x > 0n) { p += x / (i + 2n); x = x * (i + 2n) / ((i + 3n) * 4n); i += 2n; } // loop, not recursion: ~1.66M terms would blow the stack (and the original arrow body discarded its return value)
console.log(p / 10n ** 20n);
Not sure if I can golf it any more (the asterisks got eaten by the formatting; this is the Leibniz series):

map$l+=(-1)**$_/(1-$_*2)*4,1..<>;die$l
It is fun to scroll down and watch the "cutoff", where digits above that are not changing and digits below that are. That's just as fun in hex as it is in decimal. But yes, maybe I should add an option to turn that off.
$\pi = 3 + \sum_{k=1}^\infty 3 \frac{(2k-1)!!}{(2k)!!} \frac{1}{2k+1} \frac{1}{4^k}$ [1].
n!! is the double factorial, the product of the odd or even numbers up to n (depending on whether n is odd or even) [2].

Edit: added a simpler series from https://math.stackexchange.com/a/14116:
$\pi = \sum_{k=0}^{\infty} \frac{(2k)!!}{(2k+1)!!} \left(\frac{1}{2}\right)^{k-1}$
[1] https://imgur.com/a/YtA8kUx

The actual formula looks much less friendly (because it's tricky to write "the product of the first n odd integers"), but it's a good exercise for those who are inclined.
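The simpler series converges geometrically (the ratio between successive terms tends to 1/2), so plain doubles get there fast; a small JS sketch (my own, following the term ratio (2k+2)/((2k+3)*2)):

```javascript
// pi = sum_{k>=0} (2k)!!/(2k+1)!! * (1/2)^(k-1)
let s = 0;
let term = 2; // k = 0: (0!!/1!!) * (1/2)^(-1) = 2
for (let k = 0; k < 60; k++) {
  s += term;
  term *= (2 * k + 2) / ((2 * k + 3) * 2); // ratio of term k+1 to term k
}
console.log(s); // converges to pi at double precision
```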
Got you beat, js.