-- 1 --
A very effective way to diagnose some kinds of problem in numerical software is to run it under different floating-point rounding modes. If you're using an algorithm whose outputs are pathologically sensitive to small variations in its inputs (or in intermediate results), you're likely to be able to tell, because the final results will differ by more than a few bits in the lowest places.
Unfortunately, support for doing this is lacking in most programming languages and environments. This is a Bad Thing.
-- 2 --
When doing FP computation that mixes single and double precision, it is tempting for a language implementation to evaluate mixed-precision operations in single precision rather than double. The latest version of MATLAB (at the time Kahan gave this talk) does exactly this.
This can be very bad, because doing more of the computation than necessary in single precision can produce needlessly inaccurate results and therefore slow down convergence of algorithms (or just plain screw them up).
This is also a Bad Thing.
-- 3 --
Computer architectures, languages and programming environments should be designed so that following the path of least resistance leads to good, not bad, numerical behaviour. Lots of more detailed proposals along these lines can be found on Kahan's web pages.
Making this happen is a big job. Kahan is likely to be dead before it's finished. So go and make it happen.
On PPC/ARM, using single precision also lets you use the AltiVec/NEON SIMD units, which can give significant performance boosts.