For more: http://en.wikipedia.org/wiki/Density_functional_theory
(I used to be a happy customer, back when I was a scientist.)
My guess is that all of the subsequent papers that use BLAST as a tool have to cite it; similarly, all sequencing papers cited Sanger as a tool, which is why its citation rate dropped when next-gen sequencing methods replaced it. This goes to show that citation count is not an accurate measure of scientific impact, because it is the equivalent of citing Git, your compiler, or your IDE for a software project.
The key difference is that BLAST is not a commodity, or at least was not at the time of its release. If there was only one compiler, or one best compiler, we would cite this in CS papers. In fact, "David A. Wheeler's sloccount" is commonly cited in CS papers, as well as Weka, LLVM's Klee, and Z3.
On the other hand, nobody cites Dijkstra's algorithm, or optimizing compilers, because those are considered foundational and have not changed in a long time. If BLAST were never supplanted, it would eventually not be cited because that would simply be how sequencing was done. But since it represents a new practice, citing it is necessary to place your work in the context of other work.
I don't see how you can say that BLAST or Klee were not scientifically impactful -- being the force multiplier that enables other avenues of research is possibly the most valuable thing a scientist can do. For example, should a paper that proves P != NP instantly become the most cited paper in computer science? Though the question is one of the deepest open problems in the field, it isn't relevant at many levels of study, and so it probably wouldn't ever become more cited than Klee.
Further still, I'd guess that the top 1% of cited papers are mostly methods papers, as the article bears out. The next tier is the top-notch findings papers that you mention, because they spark other research. The C4.5 paper probably doesn't have as many citations as Weka, because Weka is relevant beyond C4.5, but it certainly has more citations than a less-meaningful machine learning paper.
To carry all of this back to a software project: projects, in their README.md, always cite their language (like a key method) and always cite their dependent libraries (like specific methods used in that work), but never cite things like binary search (a foundational finding). A project MIGHT cite a compiler if that compiler is the only compiler that compiles it! In that case the compiler is not a commodity, and thus worth citing. Version control and editing are totally orthogonal to the actual software, unless you're working in a visual language or something, in which case you would in fact cite your IDE.
Citing Laemmli et al. was de rigueur for many years; while it was certainly an influential technique, it doesn't rank above the discovery of DNA.
I'm not saying we should discard old scientific discoveries, but it would be interesting to redo the experiments with today's technology.
The thing is that "the latest paper" on a topic is very likely to change over time, but the first paper that proposed the idea is likely to remain the same.
Re-visiting previous studies with newer approaches is a common theme in archaeology, where sites or artifacts are sometimes incompletely investigated on purpose, with the view to leaving undisturbed material that can be investigated in the future using technology not yet conceived of.