TL;DR: One blog post is for Rotten Tomatoes and the other is for Metacritic.
I really just wanted to point out another solid Empirical Bayes resource, as there aren't many around. Yours and David's make a good combination, covering different cases.
Also, MCMC for ratings? Surely you jest. If the author had touched on mixed models, then maybe it would make sense. But given the sample sizes involved here, and the noise in the variance estimates, I recommend the author investigate mixed models tout de suite if they do in fact care about the sources of shared and unshared effects on variance. Because that is exactly what mixed models do.
Regarding MCMC, one of the things I try to emphasize throughout the post is that the best solution depends on your needs (for example, whether you want a full posterior). In fact, most of the post is devoted to quick and simple methods -- not MCMC -- because they are good enough for most purposes. I welcome your feedback, though, on how I could make this point clearer.
1. If there are no ratings, the Bayesian average is close to the overall mean, and
2. If there are many ratings (how many depends on how big the site is), C and m do not affect the result much.
You probably can do a little better if you have a lot of data and the ability to run A/B tests, but for the vast majority of cases pseudocounts work just fine.
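The two properties above fall directly out of the pseudocount formula. Here's a minimal sketch (the function name is mine; C is the pseudocount weight and m is the prior/overall mean, matching the symbols in the list):

```python
def bayesian_average(ratings, m, C):
    """Shrink an item's mean rating toward the site-wide mean m.

    C acts as a pseudocount: the prior mean m is weighted as if it
    came from C extra ratings.
    """
    n = len(ratings)
    return (C * m + sum(ratings)) / (C + n)

# 1. No ratings: the result is exactly the overall mean m.
print(bayesian_average([], m=3.5, C=5))          # 3.5

# 2. Many ratings: C and m barely move the result.
print(bayesian_average([4.0] * 1000, m=3.5, C=5))  # ~3.998
```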