Presumably most _actual_ compilers don't produce the correct solution 100% of the time (i.e. they have bugs), but I think it's reasonable to say that the compiler _understands_ ${programming language}. Maybe the difference between "understanding" and "just memorizing answers" is more subtle than often portrayed?
Is the same true of interpreting programming languages? If so, a bug isn't just "I haven't seen a similar enough example"; it reflects a deeper mistake that will likely recur.
(factorial 5) => 120
The author should at least have tested a counter-example:
(setf factorial (red hot chilly nonsense))
I expect the 'evaluation'
(factorial 5) => 120
to be 'correct' regardless of the function definition!
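For contrast, here is what the counter-example test looks like against a real evaluator. This is a minimal Common Lisp sketch (the `42` redefinition is my own stand-in for the nonsense definition above): a genuine interpreter's answer must track the current definition, so redefining `factorial` changes the result of `(factorial 5)`.

```lisp
;; Ordinary recursive factorial.
(defun factorial (n)
  (if (<= n 1)
      1
      (* n (factorial (- n 1)))))

(factorial 5)  ; => 120

;; Redefine factorial to something else entirely
;; (a stand-in for the nonsense definition).
(defun factorial (n)
  (declare (ignore n))
  42)

(factorial 5)  ; => 42, not 120: the evaluation follows the definition
```

A system that keeps answering 120 after the redefinition is pattern-matching on the symbol `factorial`, not interpreting the program.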