I previously saw companies like Sun completely fail at this, e.g. the many Java specifications created by standards bodies. Sun tried to do it right by shipping a reference implementation with each spec, but the reference implementations were rarely used for real work, so they proved only that the spec could be built, not that the spec elegantly solved real problems.
I wouldn't necessarily say "innovate" or "offer," but they do understand the process. You can make pretty much anything a "standard" with a bit of money and time (isn't Microsoft Office's XML format a "standard"?), but adoption is always an issue. However, Google controls a popular web server (Google search) and client (Google Chrome), so for web-things, they can create "widespread adoption" for whatever they standardize.
Google's innovation is making HTTP faster over a slow, unreliable network (e.g. a wireless device). They solved a real-world problem, proved it on their own users, and now are going to standardize. Their innovation is driving their standardization efforts.
If Google hadn't solved a real-world problem, then even with their platform they couldn't drive widespread adoption. Their innovations (SPDY and now QUIC) solve real-world problems, so adoption will become widespread.
MSFT with Office XML was solving a political problem, not a real-world problem. I.e. Office was taking a hit because DOC/XLS were proprietary formats; governments were concerned about archiving documents in a proprietary format and were therefore threatening to move to open standards (i.e. OSS office suites). MSFT fought back by pushing through a standard document format to give their sales staff a rebuttal to customers threatening to move to an open standard. I.e. the 'standard' only has traction due to MSFT's monopoly on Office and serves no real benefit to anyone except MSFT's salesforce.
The difference with Office XML is that a) there's not a clear benefit to the community as a whole to adopting the new standard, and b) they don't seem to have made any great effort to encourage their competitors to implement it.
It's really messed up that all our non-xmpp (a massive majority) messaging goes over nonstandard protocols. All our daily communications are behind walled gardens; one of the sorriest states in tech today.
Hangouts going open source is one of the few ways this could change in a reasonable time frame.
In other words: very unlikely.
Google says they will eventually submit this to IETF and produce a reference implementation, but it is interesting how a company was able to quietly move a large user base from open protocols to a proprietary protocol.
50% of all communication between Chrome and Google sites is now through a path that is not standardized, nor on track to standardization, and is just special to the combination of Google's browser plus Google's sites. That sets off warning lights for some people, and for good reason.
I totally get that to experiment with a new protocol, you need real-world data. Definitely. So if say 1% or 5% of that communication were non-standard, I really wouldn't have much of an issue. But when the "experimentation" is 50%, it's on the verge of being the normal path; it doesn't seem like experimentation. Perhaps they could continue the experiment and go from 50% to 60% or 80% - there isn't much of a difference at that point. In fact, if the new protocol is better, it would be almost irrational not to - if 50% is considered ok ethically, and moving to 60% saves costs, then why wouldn't you?
I'm not saying that there is something wrong here. It's not even seriously worrying, given Google's positive track record on SPDY. Still, it's very close to the edge of being worrying. That worry of course is that they expand beyond 50%, and are slow to standardize or never do so - in which case things would clearly be wrong. Again, Google has a good reputation here, given SPDY. Still, I'm surprised Google feels ok to move half of all communication to a non-standard protocol, apparently unconcerned about that worrying anyone.
1. proprietary vs. Free / Open Source (code released under an F/OSS license), and
2. proprietary vs. Open Standards (an implementation of a standard governed by an independent standards body and freely implementable.)
QUIC is not proprietary but F/OSS on the first axis, and currently proprietary rather than based on an open standard on the second axis, with a stated intent of becoming the basis for open standards work in the future.
There is, I think, a pretty good case that this is a good way to get to new open standards that are actually useful for solving real world problems.
The distinction between open-source-but-proprietary and open-standard is important for many reasons. One of the most important is that open-source-but-proprietary protocols, if they catch on, end up devolving into bug-for-bug compatibility with a giant pile of C++.
If the feature is actually an improvement, it should be on for everyone that's able to run the code as soon as possible. Ship fast and break nothing.
To address a different aspect of your comment, I do think it's very interesting how little attention we pay to the packets of data sent between software running on our personal devices and remote servers. Slap some TLS on it, and nobody even notices.
I think there's a fundamental OS-level feature, and a highly visible UI component, that is outright missing: letting users understand not just what programs are connecting to where, but what they are actually sending and receiving. If it didn't have such horrendous implications and failure modes, I would love to have a highly functional deep-packet MitM proxy keeping tabs on exactly what my computer is doing over the network. You know, or the NSA could publish a JSON API to access their copy?
Personally I have a much broader use case in mind for E3X than QUIC is designed for, incorporating IoT and meta-transport / end-to-end private communication channels. So, I expect they'll diverge more as they both evolve...
[1] https://www.schneier.com/blog/archives/2012/12/feudal_sec.ht...
[1] cr.yp.to/tcpip/minimalt-20130522.pdf
I wish we were having a conversation where djb had written an amazing and performant MinimaLT implementation that we could compare against QUIC. But we're not. We're having a conversation where shipping, performant code runs one protocol, and you're presenting an alternative that pretty much exists only as a PDF document.
Believe me, I looked to figure out if there was a good solution for incorporating MinimaLT into code right now and there's not. I have a project where this is relevant. I'm looking at QUIC now and I may incorporate it as an alternative transport layer. (It duplicates some of my own work though, so I'm not sure whether to strip that stuff out or just make mine flexible enough to work on top.)
(To say nothing that QUIC can be implemented without a kernel module, which is a handy side-effect of doing things over UDP. A shame that's a factor, but of course it is in realistic systems deployment.)
Re. kernel module: both QUIC and MinimaLT can be implemented in user space.
It's a shame that SCTP is not more widely adopted, as I suspect it may be just as good (if not better) as a transport layer for building a new web protocol on.
The key is "Conceptually, all handshakes in QUIC are 0-RTT, it’s just that some of them fail and need to be retried" (at least the first time you contact the server a 1-roundtrip handshake is required).
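To make that quote concrete, here is a toy model (not real QUIC, all names hypothetical) of "all handshakes are 0-RTT, it's just that some of them fail and need to be retried": the client always sends data optimistically with whatever server config it has cached, and only on rejection does it pay a full round trip to fetch a fresh config before retrying.

```python
# Toy sketch of QUIC's optimistic 0-RTT idea. Everything here is
# illustrative; real QUIC's crypto and wire format are far more involved.

VALID_CONFIG = "cfg-v2"  # what the toy server currently accepts

def server_receive(config, data):
    """Toy server: accept 0-RTT data only with the current config."""
    return f"OK:{data}" if config == VALID_CONFIG else "REJECT"

def full_handshake():
    """Costs one round trip; returns the server's current config."""
    return VALID_CONFIG

def connect_and_send(cache, server, data):
    reply = server_receive(cache.get(server), data)  # optimistic 0-RTT try
    if reply == "REJECT":
        cache[server] = full_handshake()             # 1-RTT fallback
        reply = server_receive(cache[server], data)  # retry; now succeeds
    return reply

cache = {}
print(connect_and_send(cache, "example.com", "GET /"))  # first contact: 1-RTT fallback -> OK:GET /
print(connect_and_send(cache, "example.com", "GET /"))  # repeat visit: true 0-RTT -> OK:GET /
```

The asymmetry is the whole point: the retry penalty is only paid on first contact or when the cached config goes stale, so steady-state connections skip the handshake delay entirely.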
Does UDP mess up other traffic?
Does QUIC make things more complex? You are replacing TCP + TLS with roughly TLS over UDP with some reliability features built in. TLS and TCP are already crazy complex (behold the state diagram for closing a TCP connection! [CS undergrad heads explode]). Plus, people have already built a number of pseudo-TCP protocols that run over UDP.
QUIC + their kind-of-TLS-lite protocol is certainly newer and less well known. That may make things a little harder. But ARP is complex. IP is complex. TCP is complex. Wireshark and others largely abstract this away. I'm excited by the speed, and by the hopefully reduced attack surface of these potentially simpler protocols.
I for one welcome what this would do for me on high-latency high-loss connections (read: poor cell phone coverage). I just need Apple to buy into this ...
They are limited in what they can do because they have to talk to the exchange, so it's still TCP/IP for order sending, with either FIX or a binary protocol like ITCH/OUCH on top.
As far as their networking stack goes, if they are ultra-low-latency HFT then they'll use FPGAs and Arista-brand switches or InfiniBand hardware.
The only big customization that most HFT firms do is moving the networking stack into userland, but that's a well-known area. I'm not aware of any HFT firms that write their own networking stack from the ground up, though I'm sure there are a few :)
Not much that they do is transferable to everyday computing because most, I'd say 90%, of the performance comes from custom hardware and not the software.
Or put another way: Google already has more than enough talent to optimize their QUIC protocol. Buying an HFT firm wouldn't do much for them, as HFT speed comes from areas that most people setting up servers won't want to touch.
For example, such stacks often put the entire communication stack in userspace, with hardcoded knowledge of how to talk to a specific hardware networking stack, and no ability to cooperate with other clients on the same hardware.
Financial incentives made HFTs and the like go further than average software companies - just look at the microwave networks.
However, if the promise is to be an always-encrypted Transport layer (kind of like how CurveCP [1] wanted to be - over TCP though) with small performance gains - or in other words no performance drawbacks - then I'm all for it.
I'm just getting the feeling Google is promoting it the wrong way. Shouldn't they be saying "hey, we're going to encrypt the Transport layer by default now!" ? Or am I misunderstanding the purpose of QUIC?
[1] - http://curvecp.org/
The 100ms ping time in the diagram may be pretty high for connections to Google, with its large number of geographically distributed servers, but for J. Random Site with only one server it's about right for US coast-to-coast pings, and international pings are of course significantly higher. [1] states that users will subconsciously prefer a website if it loads a mere 250ms faster than its competitors. If two websites are on the other coast, have been visited before, and are using TLS, one of them can get most of the way to that number (200ms) simply by adopting QUIC! Now, I'm a Japanophile and sometimes visit Japanese websites; my ping time to Japan is about 200ms[2], and double that is 400ms, which is the delay the same article says causes people to search less. Not sure this is a terribly important use case, but I know I'll be happier if my connections load faster.
Latency is more important than people think.
[1] http://www.nytimes.com/2012/03/01/technology/impatient-web-u...
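The round-trip math above can be sketched in a few lines. Assumptions (mine, not from the article): a TCP handshake costs 1 RTT, a resumed TLS session costs roughly 1 more RTT before the request can be sent, and a repeat QUIC connection to a known server costs 0 RTTs.

```python
# Back-of-envelope connection-setup math for the figures in the comment
# above. The RTT counts are simplifying assumptions: TCP handshake = 1 RTT,
# resumed TLS = 1 RTT, repeat QUIC connection = 0 RTTs before data flows.

def setup_delay_ms(rtt_ms, rtts_before_data):
    return rtt_ms * rtts_before_data

for rtt in (100, 200):  # ~US coast-to-coast, ~US<->Japan
    tcp_tls = setup_delay_ms(rtt, 2)   # TCP + resumed TLS
    quic = setup_delay_ms(rtt, 0)      # repeat QUIC connection
    print(f"RTT {rtt}ms: TCP+TLS setup {tcp_tls}ms, QUIC saves {tcp_tls - quic}ms")
    # -> 200ms saved at 100ms RTT, 400ms saved at 200ms RTT
```

Those are exactly the 200ms and 400ms figures in the comment; a full (non-resumed) TLS handshake would add another RTT on top.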
How does this work?
As a total guess, I assume the client gets a stream of packets, buffers them all up, and waits for some threshold before re-requesting any missing sequence numbers. When a missing packet comes back in (all while the stream continued), it puts it in place and pushes the data up to the application, clearing its buffer. The client probably sends "I'm good up to sequence n" every once in a while so the server can clear its re-transmit buffer.
That's pretty cool. Treat it as a lossy stream, rather than a "OH CRAP EVERYBODY STOP EVERYTHING, FRED FELL DOWN!". With this, FRED IS DED!
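That total guess can be sketched in a few lines. This is a simplified illustration of the buffering-and-re-request idea, not QUIC's actual loss recovery or wire format; all names are made up.

```python
# Sketch of the guessed reassembly logic: buffer out-of-order packets,
# deliver contiguous data up to the application, and build a list of
# missing sequence numbers to re-request.

class Reassembler:
    def __init__(self):
        self.buffer = {}     # seq -> payload, waiting for gaps to fill
        self.next_seq = 0    # first sequence number not yet delivered
        self.delivered = []  # what has been pushed up to the application

    def receive(self, seq, payload):
        if seq >= self.next_seq:
            self.buffer[seq] = payload
        # Push any now-contiguous data up to the application and drop it
        # from the buffer (the "clears its buffer" step).
        while self.next_seq in self.buffer:
            self.delivered.append(self.buffer.pop(self.next_seq))
            self.next_seq += 1

    def missing(self, highest_seen):
        """Sequence numbers to re-request from the server."""
        return [s for s in range(self.next_seq, highest_seen + 1)
                if s not in self.buffer]

r = Reassembler()
for seq, data in [(0, "a"), (2, "c"), (3, "d")]:  # packet 1 (Fred) fell down
    r.receive(seq, data)
print(r.missing(3))   # -> [1]: only the lost packet is re-requested
r.receive(1, "b")     # retransmission arrives; the stream never stalled
print(r.delivered)    # -> ['a', 'b', 'c', 'd']
```

Note how packets 2 and 3 sat in the buffer instead of forcing a full stop: that's the "lossy stream" treatment versus TCP's everyone-waits-for-Fred behavior.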
Others have discussed the technical aspects of what QUIC is achieving, but you can understand its purpose fairly easily by saying "QUIC" out loud ;)
If that's not clear enough, it stands for "Quick UDP Internet Connections", which I think makes it fairly clear what it achieves. You can read more about it in the FAQ: https://docs.google.com/a/chromium.org/document/d/1lmL9EF6qK...
Note that the blog post doesn't say "1% slowest sites", it says "1% slowest connections" - that's the mobile and satellite users. Think about how many seconds it takes to load google.com on your phone when your signal isn't great. How does taking a second off that sound to you?
Google is saying that, for clients connecting to the same site, the slowest 1% of those clients saw a 1-second improvement in page load time by using QUIC instead of TCP (presumably it's SPDY + QUIC against SPDY + TCP, as they say at the end of the article). That's pretty good.
It was 1 second shaved