Exactly! The problems arise from the disconnect between what the math is actually saying and what people think it's saying. Or rather: what they wish it was saying. Frequentist methods give you "if page A performs the same as page B, then the probability of observing something at least as extreme as this measurement is less than X%". In practice we almost never want to know this. What people actually want to know is "given this measurement, the probability that page A is better than page B is X%", so they interpret whatever number comes out of the frequentist method that way... wishful thinking.
Just give them two posterior distributions: one for the conversion rate of page A and one for page B. That may look more daunting than a single number at first, but it's much easier to interpret than the single number that comes out of hypothesis testing, and, you know, it's the information they actually need to decide whether to pick page A or page B.
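A minimal sketch of what that looks like in practice, assuming binary conversions, a uniform Beta(1, 1) prior, and made-up visit/conversion counts: the posterior for each page's conversion rate is a Beta distribution, and Monte Carlo sampling from the two posteriors gives the "probability B beats A" number people actually want.

```python
import random

# Hypothetical data (made-up numbers): conversions / visitors per page.
conv_a, n_a = 120, 1000
conv_b, n_b = 140, 1000

def posterior_sample(conversions, visitors):
    # With a Beta(1, 1) prior and binomial data, the posterior of the
    # conversion rate is Beta(conversions + 1, non-conversions + 1).
    return random.betavariate(conversions + 1, visitors - conversions + 1)

# Monte Carlo estimate of P(rate_B > rate_A) under the two posteriors.
random.seed(0)
draws = 100_000
wins_b = sum(
    posterior_sample(conv_b, n_b) > posterior_sample(conv_a, n_a)
    for _ in range(draws)
)
print(f"P(B beats A) ~= {wins_b / draws:.3f}")
```

The same two posteriors can also be plotted side by side, which makes the overlap (i.e. the remaining uncertainty) visible instead of hiding it behind one number.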