There is currently a lot of hand-wringing in academia about open access publishing. Everyone wants it, and it is technically trivial to switch a field over (machine learning has done so, for the most part), but it requires the leaders in the field to drive the change, and they are normally too invested in the status quo. What the high ranking of arxiv suggests to me is that while people pay lip service to the idea that the (mostly closed) journals are important and hold the definitive version of a paper, the reality is that no one gives a damn and just goes to arxiv when they want to read something.
Anyway, given that in some fields _everything_ that's written goes to arxiv, arxiv will [by definition][1] have a very high h-index. The thing is, the h-index was conceived to compare individuals, not journals.
One problem with impact factors is the way a few articles can account for the majority of citations. For instance, a widely used bioinformatics method can attract thousands of citations, single-handedly boosting the journal's impact factor by a few points. This metric doesn't solve that, as it expressly focuses on the top n articles and ignores the impact of the remainder. For instance, PLoS One's score of 100 means its top 100 articles each received at least 100 citations - it says nothing about the citation distribution of the rest.
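To make the definition concrete, here is a minimal sketch in Python with made-up citation counts (not real PLoS One data): the score is the largest n such that the top n articles each have at least n citations, so everything below rank n is ignored.

```python
def h_index(citations):
    """Largest n such that n articles have at least n citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical journal: a couple of blockbuster papers, then a long tail.
journal = [950, 400, 120, 90, 15, 12, 9, 7, 5, 3, 2, 1, 0, 0]
print(h_index(journal))  # -> 7: the 950-citation blockbuster counts the same
                         #    as any other top-h paper, and the tail is ignored
```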
In particular, it's not robust to one factor often mentioned in the bibliometrics literature: sensitivity to trivial changes in agglomeration size. Say a set of 200 articles is published either by 1) a single journal, or 2) two journals that publish 100 of them each. In both hypotheticals, the individual articles have the same citation counts. Under this metric, #1 gets a higher ranking, meaning you can raise a ranking without increasing paper quality simply by agglomerating journals. (You can even keep running the two former journals separately inside the new one, with a two-track review structure, as long as there's only one title on the front page.)
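A scaled-down numeric sketch of that agglomeration effect, again with made-up citation counts (a compact version of the same h_index helper as above):

```python
def h_index(citations):
    # Largest n such that n articles have at least n citations each.
    return max((r for r, c in enumerate(sorted(citations, reverse=True), 1) if c >= r),
               default=0)

# The same articles, published either in one journal or split across two.
journal_a = [30, 28, 25, 22, 20, 18, 15, 12, 10, 8, 5, 3, 1, 0]
journal_b = [29, 27, 24, 21, 19, 17, 14, 11, 9, 7, 4, 2, 1, 0]
merged = journal_a + journal_b  # option 1: a single agglomerated journal

print(h_index(journal_a), h_index(journal_b))  # 9 9  (option 2: two journals)
print(h_index(merged))                         # 14   (same papers, higher score)
```

No article's citation count changes between the two scenarios; only the grouping does, yet the merged journal scores higher than either of the split ones.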
Incidentally, Microsoft Academic Search is pretty impressive so far. They've added a lot of features, and they also have an API that is pretty easy to use, which Google Scholar doesn't offer.
EDIT: It would be fair to say that if a database is so widely cited, then it is important. So maybe the index is more robust than I originally thought. But something still seems skewed here.
In summary, the h5-index is simple to understand, hard to manipulate, and provides a reasonable if crude measure of the respect accorded to a journal by scholars within its field.

While journal metrics are no guarantee of the quality of a journal, if they are going to be used we should use the best available, and Google’s h5-index is a big improvement on the ISI impact factor.
[1]: http://robjhyndman.com/researchtips/google-scholar-metrics/