(a,b)+(c,d) = (a+c,b+d)
(a,b)/sqrt(5) = (b,a/5)
(a,b)(c,d) = (ac+5bd,ad+bc)
2phi = (1,1)
F(n) = (2phi^n - (2-2phi)^n)/(2^n*sqrt(5))
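A minimal sanity check of these rules in Python (the pair (a, b) stands for a + b*sqrt(5); the function names are mine, chosen to match the code further down):

```python
# A pair (a, b) represents the number a + b*sqrt(5).
def mul(x, y):
    # (a + b*sqrt5)(c + d*sqrt5) = (ac + 5bd) + (ad + bc)*sqrt5
    return (x[0]*y[0] + 5*x[1]*y[1], x[0]*y[1] + x[1]*y[0])

def divsq5(x):
    # (a + b*sqrt5)/sqrt5 = b + (a/5)*sqrt5; exact when 5 divides a
    return (x[1], x[0] // 5)

# 2*phi = 1 + sqrt(5) = (1, 1), so (2*phi)^2 should be 6 + 2*sqrt(5):
print(mul((1, 1), (1, 1)))   # (6, 2)
```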
[edit] As ot said, the right word is "extended"
I think "extended" is the right word (see http://en.wikipedia.org/wiki/Ring_extension)
It is commonly denoted as Z[sqrt(5)]
As far as I can see, this algorithm is very similar to the matrix operations of the article, at least in terms of complexity.
[1] Only the very last step might involve fp arithmetic, because you'd have to convert the result pair (x,y) back to x+y*sqrt(5). However, we already know that the result has to be an integer, so its second part will always be 0; no fp arithmetic is required.
Eleven seconds to compute the 1,000,000th Fibonacci number, compared with 143 seconds for the naive method.
Code below:
def add(a, b): return (a[0]+b[0], a[1]+b[1])
def sub(a, b): return (a[0]-b[0], a[1]-b[1])
def divsq5(a): return (a[1], a[0]//5)  # // so the result stays integral in Python 3
def mul(a, b): return (a[0]*b[0]+5*a[1]*b[1],a[0]*b[1]+a[1]*b[0])
def subi(x, a): return (x-a[0],-a[1])
def pow(b, e):
    r = (1, 0)
    while e > 0:
        if e & 1 == 1:
            r = mul(r, b)
        e = e >> 1
        b = mul(b, b)
    return r
twophi = (1, 1)

def fib0(n):
    return divsq5(sub(pow(twophi, n), pow(subi(2, twophi), n)))[0] >> n
def fib1(n):
    a = 0
    b = 1
    while n > 0:
        (a, b) = (b, a+b)
        n -= 1
    return a

;this calculates (a+b√sqr)^n, using exponentiation by squaring
(def awful-thing (n a b sqr)
(afnwith (n n i a r b it 1 rt 0)
(if (is n 0)
(list it rt)
(even n)
(self (/ n 2)
(+ (square i) (* sqr (square r)))
(* 2 i r)
it
rt)
(self (- n 1)
i
r
(+ (* i it) (* sqr r rt))
(+ (* i rt) (* r it))))))
(def fib (n)
(with (pow2 (expt 2 n)
hh (map - (awful-thing n 1 1 5) (awful-thing n 1 -1 5)))
;now hh = '(a b) where (fib n) = (a + b*root5)/(root5*pow2)
;a should = 0
(/ (cadr hh)
     pow2)))

(The asymptotic complexity is surely the same, in any case.)
It's straightforward (if messy) to calculate two rationals exactly, then approximate √5 to within certain error bounds.
I wrote some code which does this, just as an experiment... and it's unbearably slow. Just try it.
import Control.Arrow
import Data.Ratio
newtype Matrix2 a = Matrix2 (a, a, a, a) deriving (Show, Eq)

instance (Num a) => Num (Matrix2 a) where
  Matrix2 (a, b, c, d) * Matrix2 (e, f, g, h) =
    Matrix2 (a*e+b*g, a*f+b*h, c*e+d*g, c*f+d*h)
  fromInteger x = let y = fromInteger x in Matrix2 (y, 0, 0, y)

fib n = let Matrix2 (_, x, _, _) = Matrix2 (1, 1, 1, 0) ^ n in x

binom n =
  scanl (\a (b, c) -> a*b `div` c) 1 $
    takeWhile ((/=) 0 . fst) $ iterate (pred *** succ) (n, 1)

evens (x:_:xs) = x : evens xs
evens xs = xs

odds (_:x:xs) = x : odds xs
odds _ = []

iterate' f x = x : (iterate f $! f x)

powers b = iterate (b *) 1

esqrt e n = x where
  (_, x):_ = dropWhile ((<=) e . abs . uncurry (-)) $ zip trials (tail trials)
  trials = iterate (\x -> (x + n / x) / 2) n

fib' n = (a, b) where
  d = 2 ^ n
  a = sum (zipWith (*) (odds $ binom n) (powers 5)) % d
  b = sum (zipWith (*) (evens $ binom n) (powers 5)) % d

fib2 n = numerator r `div` denominator r where
  (a, b) = fib' n
  l = lcm (denominator a) (denominator b)
  r = a + esqrt (1 % l) (b * b / 5) + 1 % 2

The closed-form complexity is not the same (not sure how it could be). Here's the code written in pseudocode for a typical language:
fib(n) {
    double c = 1.6180339887;
    return round((power(c, n) - power(1-c, n))/sqrt(5));
}
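For what it's worth, the double-precision version above only works for small n; a quick sketch (mine, not from the comment) of where it first goes wrong:

```python
import math

def fib_float(n):
    c = (1 + math.sqrt(5)) / 2        # golden ratio, as a double
    return round((c**n - (1 - c)**n) / math.sqrt(5))

def fib_exact(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Find the first n where double precision gives the wrong answer.
first_bad = next(k for k in range(80) if fib_float(k) != fib_exact(k))
print(first_bad)   # around 70 on typical IEEE-754 doubles
```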
Note: I assume your point wasn't that computing the golden ratio exactly is... well ummm... time consuming.

The code below declares that S is a sorted version of L exactly when S is a permutation of L and S is sorted.
permutation_sort(L,S) :- permutation(L,S), sorted(S).
sorted([]).
sorted([_]).
sorted([X,Y|ZS]) :- X =< Y, sorted([Y|ZS]).
permutation([],[]).
permutation([X|XS],YS) :- permutation(XS,ZS), select(X,YS,ZS).

No no no: Bogosort is way worse than naive recursive Fibonacci. The former doesn't even guarantee termination; recursive Fibonacci still does.
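The same declarative idea can be sketched in Python with a deterministic permutation sort (my translation, not the linked Prolog):

```python
from itertools import permutations

def permutation_sort(lst):
    # Try every permutation in order; return the first sorted one.
    # O(n! * n): deterministic, always terminates, ridiculously slow.
    return next(list(p) for p in permutations(lst)
                if all(x <= y for x, y in zip(p, p[1:])))

print(permutation_sort([3, 1, 2]))   # [1, 2, 3]
```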
If you want to calculate fibonacci numbers not as a misguided exercise in algorithms but actually efficiently, use an algebraic form: http://en.wikipedia.org/wiki/Fibonacci_number#Computation_by...
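One of the methods described on that page is fast doubling, using the identities F(2n) = F(n)(2F(n+1) - F(n)) and F(2n+1) = F(n)^2 + F(n+1)^2; a sketch:

```python
def fib_pair(n):
    """Return (F(n), F(n+1)) by fast doubling: O(log n) multiplications."""
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n // 2)      # a = F(k), b = F(k+1) for k = n // 2
    c = a * (2 * b - a)          # F(2k)
    d = a * a + b * b            # F(2k+1)
    return (d, c + d) if n % 2 else (c, d)

def fib(n):
    return fib_pair(n)[0]

print(fib(10))   # 55
```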
[Edited to add: thanks for the replies. I think I expressed myself poorly here. It’s not that I don’t understand the difference between “with probability 1” and “always”. What I mean is that I don’t understand why people sometimes make a big deal out of it, and say “that algorithm is not even guaranteed to terminate” as though that somehow means the algorithm is untrustworthy or useless. The trouble with the game “roll a die until you get 100 sixes in a row” is that it has an astronomical expected running time – not that it might _never_ end, but that it will almost certainly take a very long time.]
Unfortunately, it isn't. This is why in mathematics we say that an event with probability 1 is "almost sure", which is different from a "sure" event.
For instance, you can play a game where you roll a die over and over again. You win if you get 100 sixes in a row. If you play this game without any time limit, your probability of winning is exactly 1, which means you can be "almost sure" you'll win in the end. However, you can't be sure, because there is still the possibility that you never roll 100 sixes in a row, even in an eternity.
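For what it's worth, the expected number of rolls until a run of k sixes is 6(6^k - 1)/5 (a standard waiting-time result; the code is mine), so the k = 100 game is "almost surely" winnable yet hopeless in practice:

```python
def expected_rolls(k):
    # Expected rolls of a fair die until k sixes in a row:
    # E(k) = 6*E(k-1) + 6 with E(0) = 0, so E(k) = 6*(6**k - 1)/5.
    return 6 * (6**k - 1) // 5   # 6**k - 1 is always divisible by 5

print(expected_rolls(1))               # 6
print(expected_rolls(2))               # 42
print(len(str(expected_rolls(100))))   # a 78-digit number of rolls
```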
Note that this only happens in infinite probability spaces. In finite spaces, "almost sure" and "sure" are equivalent.
Also note that the same holds for "probability 0" which means "almost impossible", not to be confused with "impossible".
Yes.
Particularly when done on real machines that use pseudo-random numbers that will repeat in finite time. For instance many use http://en.wikipedia.org/wiki/Mersenne_twister with a version that will repeat after 2^19937-1 steps. With a random array of 2500 elements, the odds are very good that bogosort will fall into a loop before it has successfully sorted the array.
http://en.wikipedia.org/wiki/Almost_surely
Bogosort almost surely terminates.
And the worst case of bogosort is not to terminate.
[1] http://en.wikipedia.org/wiki/Fibonacci_number#Closed-form_ex...
http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-11.html...
The key point is that he is trying to calculate arbitrarily large fibonacci numbers, and so the addition is itself an O(n) operation. This means that even after memoization, the complexity is still O(n^2), and he uses a number of tricks to reduce that.
But it's a nice discussion in any case.
That's what I call a bad algorithm!
What's nice about it is that it is deterministic yet ridiculously slow.
It can also be really easily implemented in Prolog[1] where you simply define what a permutation and being sorted means. After that you just search for a sorted permutation.
[1] http://rosettacode.org/wiki/Sorting_algorithms/Permutation_s...
http://en.wikipedia.org/wiki/Quarantine_%28Greg_Egan_novel%2...
Since the Fibonacci numbers are defined by a linear recurrence, we can express the recurrence as a matrix, and it’s easy to verify that...
It's been way too long since my math minor for me to understand that.
I think it’s only the jargon that’s confusing you. As long as you can remember (or look up) the definition of matrix multiplication, then it genuinely is easy to verify.
[ fib(n-1)  fib(n)   ]     [ 0 1 ]
[ fib(n)    fib(n+1) ]  x  [ 1 1 ]

  = [ fib(n)    fib(n-1)+fib(n)  ]
    [ fib(n+1)  fib(n)+fib(n+1) ]

  = [ fib(n)    fib(n+1) ]
    [ fib(n+1)  fib(n+2) ]

[Edit:] But there are some interesting data structures based on them: http://en.wikipedia.org/wiki/Fibonacci_heap
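A direct Python translation of the matrix identity above, with the power computed by repeated squaring (names are mine):

```python
def mat_mul(A, B):
    # 2x2 matrix product, matrices as pairs of row tuples.
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return ((a*e + b*g, a*f + b*h), (c*e + d*g, c*f + d*h))

def mat_pow(M, n):
    R = ((1, 0), (0, 1))   # identity
    while n:
        if n & 1:
            R = mat_mul(R, M)
        M = mat_mul(M, M)
        n >>= 1
    return R

def fib(n):
    # [[0,1],[1,1]]^n = [[F(n-1), F(n)], [F(n), F(n+1)]]
    return mat_pow(((0, 1), (1, 1)), n)[0][1]

print(fib(10))   # 55
```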