But that's comparing apples to oranges. Setting a reasonable prior is akin to frequentists interpreting the effect size (including its confidence interval) in light of deep domain knowledge. To produce a good analysis using either Bayesian or frequentist methodology (or to criticise such an analysis), you have to have deep domain knowledge. There's no getting around that, and arguably the use of p-values often lets you get away with shoddy domain knowledge.
> and Bayesianism has no way to exclude noise results at all.
This statement doesn't make any sense. Bayesian methodology has plenty of mechanisms for working with and controlling noise (obviously, since it's one of the two key paradigms in statistics, a field that fundamentally deals with noisy data). The precise error rates and uncertainties it calculates are usually different from those of a frequentist analysis, but most people consider that a benefit of Bayesian analysis.
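To make that concrete, here's a minimal sketch of one such mechanism (plain Python, conjugate normal-normal model, made-up numbers): the posterior variance quantifies exactly how much the observation noise still obscures the estimate, and it shrinks as data accumulates.

```python
def normal_posterior(prior_mean, prior_var, obs, obs_var):
    """Conjugate normal update: posterior over an unknown mean given
    noisy observations with known noise variance obs_var."""
    mean, var = prior_mean, prior_var
    for y in obs:
        precision = 1 / var + 1 / obs_var
        mean = (mean / var + y / obs_var) / precision
        var = 1 / precision
    return mean, var

# Four noisy measurements of some quantity, under a vague prior.
# The posterior mean lands near 2.0 and the posterior variance drops
# well below the per-observation noise variance of 1.0.
mean, var = normal_posterior(0.0, 100.0, [2.1, 1.7, 2.4, 1.9], 1.0)
print(mean, var)
```

The point isn't that this model is right for any given problem (choosing it is itself a domain-knowledge call); it's that "no way to exclude noise" misdescribes a framework whose entire output is a calibrated statement of remaining uncertainty.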
The whole problem we're facing is that it requires too much domain knowledge and detailed analysis to dismiss results that are actually just noise. The whole point of p-values is that they give you a way to do that without needing that in-depth, domain-heavy analysis. They're not a replacement for doing the full analysis; they're a way to cull the worst of the chaff before you do - the statistical-analysis equivalent of FizzBuzz. Bayesianism has no substitute for that (you can't say anything until you've defined your prior, which itself requires deep domain knowledge), and as such makes the problem much worse.
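For what it's worth, here's roughly what that FizzBuzz-grade filter looks like: a permutation test in plain Python (data and seed invented for illustration) that attaches a p-value to a difference in means with no domain model at all.

```python
import random

def permutation_p_value(a, b, n_perm=10_000, seed=0):
    """Two-sided permutation test for a difference in means.

    Returns the fraction of random label shuffles whose mean difference
    is at least as extreme as the observed one - the p-value."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        perm_a, perm_b = pooled[:len(a)], pooled[len(a):]
        diff = abs(sum(perm_a) / len(perm_a) - sum(perm_b) / len(perm_b))
        if diff >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)  # add-one to avoid reporting p = 0

# Two pure-noise groups drawn from the same distribution: the p-value
# is typically large, so the "effect" gets culled without any domain
# modelling. Shift one group and the p-value collapses.
rng = random.Random(1)
noise_a = [rng.gauss(0, 1) for _ in range(50)]
noise_b = [rng.gauss(0, 1) for _ in range(50)]
print(permutation_p_value(noise_a, noise_b))
print(permutation_p_value(noise_a, [x + 2 for x in noise_b]))
```

Whether that cheap cull is a feature or a hackable shortcut is exactly what's in dispute below.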
Well, you can use a non-informative prior, and that's the correct choice when you genuinely don't have a better option. But you should always be able to justify that choice, and the justification in turn requires deep domain knowledge....which leads me to....
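Concretely, here's a sketch of what that choice amounts to in the simplest conjugate setting (plain Python, beta-binomial model, counts made up for illustration): a flat Beta(1, 1) prior versus an informative prior encoding a belief that the rate is near 0.5. Picking between them is precisely the domain-knowledge judgement call.

```python
from fractions import Fraction

def beta_binomial_posterior(successes, failures, alpha=1, beta=1):
    """Posterior Beta(alpha + s, beta + f) parameters and exact posterior
    mean, starting from a Beta(alpha, beta) prior. The defaults
    alpha = beta = 1 give the flat 'non-informative' uniform prior."""
    a, b = alpha + successes, beta + failures
    return a, b, Fraction(a, a + b)

# Flat prior: the posterior mean stays close to the raw frequency 7/10.
print(beta_binomial_posterior(7, 3))          # → (8, 4, Fraction(2, 3))
# Informative prior concentrated near 0.5: the same data barely move it.
print(beta_binomial_posterior(7, 3, 50, 50))  # → (57, 53, Fraction(57, 110))
```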
> The whole problem we're facing is that it requires too much domain knowledge and detailed analysis to dismiss results that are actually just noise.
....this is in no way a "problem" that needs fixing by allowing shortcuts that can easily be hacked. Rather, it's a factual statement about the difficulty of drawing correct conclusions in low signal-to-noise domains. Whether or not you use p-values, and whether or not you use Bayesian methodology, you cannot get around the need to understand the data you're working with. Bad p-values are worse than none, since you have no idea what error rate they actually achieve in the long run.
> Bayesianism has no substitute for that
Yes it does: Bayes factors. But as I said above, I completely disagree with your view of what a p-value is for.
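A minimal sketch of a Bayes factor doing exactly that job - excluding a noise result - for a binomial experiment (plain Python, made-up counts). It compares H0: p = 0.5 (pure noise) against H1: p ~ Uniform(0, 1); under the uniform prior the marginal likelihood of k successes in n trials is exactly 1/(n + 1).

```python
from math import comb

def bayes_factor_binomial(k, n):
    """Bayes factor BF10 for k successes in n trials.

    H0: p = 0.5 exactly (the pure-noise / fair-coin model).
    H1: p ~ Uniform(0, 1), whose marginal likelihood of k is 1 / (n + 1).
    BF10 < 1 favours the noise model; BF10 >> 1 counts against it.
    """
    m0 = comb(n, k) * 0.5 ** n  # marginal likelihood under H0
    m1 = 1 / (n + 1)            # marginal likelihood under H1
    return m1 / m0

# 52/100 heads: BF10 < 1, i.e. the data actively *support* the noise
# model - a noise result is excluded, not just "not rejected".
print(bayes_factor_binomial(52, 100))
# 70/100 heads: BF10 well over 100, strong evidence against pure noise.
print(bayes_factor_binomial(70, 100))
```

Note the contrast with a p-value: a non-significant p-value can never say the null is supported, whereas BF10 < 1 is positive evidence for it.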