I promise you that I've considered the spec and its implications. Where are we now?
"This study is very flawed. Talking to a proxy by SPDY doesn't magically make the connection between that proxy and the original site use the SPDY protocol, everything was still going through HTTP at some point for the majority of these sites. Further, the exclusion of 3rd party content fails to consider how much of this would be 1st party in a think-SPDY-first architecture, where you know you'll reduce round trips, so putting this content on your own domain all together would be better, anyway."
In other words, the guy benchmarked SPDY _slowed down by the HTTP connections behind it_!
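To make the objection concrete, here's a toy back-of-the-envelope latency model. Every number in it is made up for illustration (the RTTs, the request count, and the assumption of 6 parallel origin connections are not from the study): it just shows that when a SPDY-terminating proxy still fetches from the origin over HTTP, the origin's round trips dominate, so the measured speedup says little about end-to-end SPDY.

```python
# Toy model (all numbers assumed, purely illustrative): a SPDY proxy
# multiplexes the client-side leg, but the proxy still pays HTTP
# round trips to the origin for each resource.

RTT_CLIENT_PROXY = 0.05   # seconds, assumed client <-> proxy RTT
RTT_PROXY_ORIGIN = 0.10   # seconds, assumed proxy <-> origin RTT
NUM_REQUESTS = 20         # assumed number of resources on the page
PARALLEL_CONNS = 6        # assumed parallel HTTP connections

def spdy_via_http_proxy(n):
    # SPDY multiplexes all n requests over one client<->proxy
    # connection (roughly one RTT for the batch), but the proxy
    # still fetches each resource over plain HTTP in parallel.
    origin_time = (n / PARALLEL_CONNS) * RTT_PROXY_ORIGIN
    return RTT_CLIENT_PROXY + origin_time

def plain_http(n):
    # Parallel HTTP connections straight through to the origin:
    # each batch of requests pays the full client<->origin RTT.
    return (n / PARALLEL_CONNS) * (RTT_CLIENT_PROXY + RTT_PROXY_ORIGIN)

print(f"SPDY via HTTP proxy: {spdy_via_http_proxy(NUM_REQUESTS):.2f}s")
print(f"plain HTTP:          {plain_http(NUM_REQUESTS):.2f}s")
```

Under these made-up numbers the proxy setup only shaves off the client-side round trips; the bulk of the latency is the HTTP leg behind the proxy, which is exactly the part SPDY was never touching in the benchmark.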
And thus, benchmarks are unhelpful.
I care about feature sets and major improvements, not minor down-to-the-wire fixes. If this were called HTTP/1.2 or something I'd be less critical, but so many issues and flaws are left unfixed, while unhelpful bikeshedding goes on over perceived "performance".
No, they are helpful! Especially real-world benchmarks. Sure, you can cook up utterly flawed benchmarks (like the one you pointed to), but that doesn't mean all benchmarks are unhelpful. A good engineer knows which benchmarks matter and which don't. You don't seem to be able to do that.
> If this were called HTTP/1.2 ...
The mere fact that you brought this up (and no amount of backpedalling after my comment will change this) makes your criticism look even stupider. You should judge the spec based on its technical content, not on whatever arbitrary version number was assigned to it. Talk about a bike-shed argument (http://en.wikipedia.org/wiki/Parkinson's_law_of_triviality).
Can you explain this point?