No FFT? No simplex algorithm? Hashing? Strassen's?
In addition to several algorithms already mentioned, I feel that suffix tree and suffix array algorithms should be there as well. They make all kinds of approximate searches feasible in bioinformatics.
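A minimal sketch of the idea (using the naive O(n^2 log n) suffix-array construction for brevity; real bioinformatics tools use linear-time constructions like SA-IS, and the bisect key parameter needs Python 3.10+):

    import bisect

    def suffix_array(s):
        # Sort suffix start positions by the suffixes themselves.
        # Naive construction; the interface matches the fast ones.
        return sorted(range(len(s)), key=lambda i: s[i:])

    def find(s, sa, pattern):
        # Binary search over the sorted suffixes: O(m log n) exact
        # search; the approximate searches build on the same index.
        m = len(pattern)
        lo = bisect.bisect_left(sa, pattern, key=lambda i: s[i:i + m])
        hi = bisect.bisect_right(sa, pattern, key=lambda i: s[i:i + m])
        return sorted(sa[lo:hi])

    text = "banana"
    sa = suffix_array(text)       # [5, 3, 1, 0, 4, 2]
    print(find(text, sa, "ana"))  # [1, 3] -- both occurrences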
A cool 19th-century algorithm is radix sort, though.
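(For the curious: Hollerith's 1887 punch-card machines were doing what we'd now call LSD radix sort. A minimal sketch for non-negative integers:)

    def radix_sort(nums, base=10):
        # LSD radix sort: one stable bucket pass per digit, least
        # significant digit first. O(d * (n + base)) overall.
        place = 1
        while nums and place <= max(nums):
            buckets = [[] for _ in range(base)]
            for x in nums:
                buckets[(x // place) % base].append(x)
            nums = [x for bucket in buckets for x in bucket]
            place *= base
        return nums

    print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
    # [2, 24, 45, 66, 75, 90, 170, 802]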
All of signal processing went from "We'd like to do a Fourier transform but can't afford O(n^2), so here is a faster alternative that kind of does something useful" to "We start with an FFT."
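The speedup in question, sketched: the naive DFT is O(n^2), while the radix-2 Cooley-Tukey recursion below is O(n log n) (power-of-two lengths only, for brevity):

    import cmath

    def fft(x):
        # Radix-2 Cooley-Tukey: recurse on even/odd halves, then
        # combine with twiddle factors. O(n log n) vs O(n^2) naive.
        n = len(x)
        if n == 1:
            return list(x)
        even, odd = fft(x[0::2]), fft(x[1::2])
        out = [0j] * n
        for k in range(n // 2):
            t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
            out[k] = even[k] + t
            out[k + n // 2] = even[k] - t
        return out

    print(fft([1, 0, 0, 0, 0, 0, 0, 0]))  # impulse -> all ones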
Neural nets opened a lot of doors.
The Bellman equation is in a lot of equipment.
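Concretely, what tends to ship is value iteration: apply the Bellman update V(s) <- max_a sum_s' p(s'|s,a) * (r + gamma * V(s')) until it converges. A toy sketch (the two-state MDP below is made up purely for illustration):

    # transitions[s][a] = list of (prob, next_state, reward) -- made-up MDP
    transitions = {
        0: {0: [(1.0, 0, 0.0)], 1: [(0.8, 1, 5.0), (0.2, 0, 0.0)]},
        1: {0: [(1.0, 0, 1.0)], 1: [(1.0, 1, 2.0)]},
    }
    gamma = 0.9
    V = {0: 0.0, 1: 0.0}

    for _ in range(100):
        # Bellman update: V(s) <- max over actions of expected return
        V = {
            s: max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in actions.values()
            )
            for s, actions in transitions.items()
        }

    print(V)  # converged state values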
Also, I don't really think taking a Taylor series for the inverse square root should count as an "algorithm."
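(Assuming this refers to the Quake III fast inverse square root: for reference, it's a bit-level initial guess from a magic constant refined by one Newton-Raphson step. In Python, via struct:)

    import struct

    def fast_inv_sqrt(x):
        # Reinterpret the 32-bit float's bits as an integer, form an
        # initial guess from the magic constant, then one Newton step.
        i = struct.unpack('<I', struct.pack('<f', x))[0]
        i = 0x5F3759DF - (i >> 1)
        y = struct.unpack('<f', struct.pack('<I', i))[0]
        return y * (1.5 - 0.5 * x * y * y)  # Newton-Raphson refinement

    print(fast_inv_sqrt(4.0))  # ~0.499 vs exact 0.5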
Deep belief nets have made things that look like neural nets popular again, but things like sparse ICA stacks are suggestive of another wave of theory-led morphing back to non-neural architectures. But the point is that the theory followed the invention of neural architectures both times. So the root is the ANN, and thus I think it is a really important algorithm in history. It was a bio-inspired innovation too, which is interesting.
It's an efficient way of performing Lasso (L^1 penalization) on regression models, which has the benefit (in addition to reducing the risk of overfitting) of producing sparse models.
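A minimal sketch of the usual coordinate-descent route, soft-thresholding one coefficient at a time (the data and lambda below are made up, and the columns of X are assumed standardized):

    import numpy as np

    def soft_threshold(rho, lam):
        # The shrinkage the L1 penalty induces; this is what zeroes
        # coefficients out and yields the sparse models noted above.
        return np.sign(rho) * max(abs(rho) - lam, 0.0)

    def lasso_cd(X, y, lam, n_iter=100):
        # Coordinate descent for (1/2n)||y - Xw||^2 + lam*||w||_1,
        # assuming each column of X has mean 0 and variance 1.
        n, p = X.shape
        w = np.zeros(p)
        for _ in range(n_iter):
            for j in range(p):
                r = y - X @ w + X[:, j] * w[j]  # residual without feature j
                w[j] = soft_threshold(X[:, j] @ r / n, lam)
        return w

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 5))
    y = 3 * X[:, 0] - 2 * X[:, 1] + 0.1 * rng.standard_normal(200)
    print(lasso_cd(X, y, lam=0.1).round(2))  # trailing weights ~0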
SIAM put out a 'ten algorithms of the century' list https://www.siam.org/pdf/news/637.pdf a few years ago, and it's really tough to argue with six or seven or eight of them.
(MCMC, simplex, Krylov subspace methods, Householder's decompositional approach to matrix computations, QR, quicksort, FFT, Ferguson & Forcade's integer relation detection (which led to things like PSLQ), and fast multipole)
And FORTRAN.