Bayesian statistics tends to use likelihood ratios or Bayes factors instead of p-values for hypothesis testing.
The trick in all cases is that you're comparing against expected results given some prior distribution. Most people use a dumb prior (e.g. Gaussian) and are then confused when the numbers make no sense, because the data is actually multimodal or heavy-tailed and therefore mismodelled.
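As a toy illustration of that mismodelling (my own sketch, not from any particular textbook): if the data is heavy-tailed, a Bayes factor comparing a Gaussian model against a Student-t model will strongly favour the latter. Here the marginal likelihoods are approximated by brute-force grid integration over a location parameter; the grid bounds, prior scale, and fixed degrees of freedom are arbitrary choices for the demo.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Heavy-tailed data: Student-t with 2 degrees of freedom
data = rng.standard_t(df=2, size=500)

# Same prior on the location parameter mu for both models,
# discretised on a grid so the marginal likelihood is a weighted sum.
mu_grid = np.linspace(-5, 5, 1001)
weights = stats.norm.pdf(mu_grid, loc=0, scale=2)
weights /= weights.sum()  # normalise the discretised prior

def log_marginal(loglike_per_mu):
    # log p(data) = log sum_mu p(data | mu) p(mu), in a stable way
    ll = np.array([loglike_per_mu(mu) for mu in mu_grid])
    m = ll.max()
    return m + np.log(np.sum(np.exp(ll - m) * weights))

# Model A: Gaussian likelihood; Model B: Student-t likelihood
log_m_gauss = log_marginal(
    lambda mu: stats.norm.logpdf(data, loc=mu, scale=1).sum())
log_m_t = log_marginal(
    lambda mu: stats.t.logpdf(data, df=2, loc=mu).sum())

# Log Bayes factor in favour of the heavy-tailed model
log_bf = log_m_t - log_m_gauss
print(f"log Bayes factor (t vs Gaussian): {log_bf:.1f}")
```

With data actually drawn from a t-distribution, the log Bayes factor comes out large and positive, i.e. the evidence against the Gaussian model is overwhelming; with genuinely Gaussian data the comparison flips.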
And then there's this: even though intro probability courses everywhere cover classical statistics with p-values, hypothesis testing, frequentist confidence intervals and so on, you won't necessarily use them that much. I calculated some p-values and other tests in R for some example datasets a couple of years ago and haven't seen them since in coursework; everything we've done after that has been more or less fully Bayesian. The concepts are still fresh[1] in my mind mostly because I read some statistics blogs, such as Andrew Gelman's [2]. The irony is that Gelman does not exactly love the frequentist framework; he just mentions its concepts often enough.
[1] or not totally forgotten