The jargon does vary by subfield and community, along with the actual measures used (sometimes it's just a different name, but sometimes the practices differ as well). Precision/recall are terms from information retrieval that migrated into the CS-flavored portion of machine learning, but they're less common in the stats-flavored portion of ML, in part because some statisticians consider them poor measures of classifier performance [1]. Hence they don't show up in the Hastie/Tibshirani/Friedman book you mention, which is written by three authors solidly on the stats side of ML. It does occasionally mention equivalent terms: Ctrl+F'ing through a PDF copy, I see that Chapter 9 borrows the sensitivity/specificity metrics used in medical statistics, where sensitivity is a synonym for recall (but specificity is not the same thing as precision). The book seems to more often use ROC curves, though, which have their own adherents and detractors.
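To make the terminology concrete, here's a minimal sketch (with made-up counts) computing all four measures from a 2x2 confusion matrix. It shows that sensitivity and recall are literally the same ratio, while specificity is a ratio over the actual negatives, not over the predicted positives the way precision is:

```python
# Hypothetical confusion-matrix counts for a binary classifier.
tp, fp = 80, 10   # predicted positive: correct / incorrect
fn, tn = 20, 90   # predicted negative: incorrect / correct

precision   = tp / (tp + fp)  # of the predicted positives, fraction correct
recall      = tp / (tp + fn)  # of the actual positives, fraction found
sensitivity = tp / (tp + fn)  # same formula as recall, medical-stats name
specificity = tn / (tn + fp)  # of the actual negatives, fraction found

print(precision, recall, sensitivity, specificity)
# recall == sensitivity always; precision != specificity in general
```

With these counts, precision is 80/90 while specificity is 90/100, so conflating the two would be an error even though both "pair with" recall/sensitivity in their respective communities.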
[1] This is the paper most often cited as background by people who dislike recall/precision as metrics: http://dspace2.flinders.edu.au/xmlui/bitstream/handle/2328/2...