It's perfectly possible for something to both be incredibly popular and to suck. An appeal to popularity is certainly not better than a click-bait headline.
I'd argue that most of the stuff we use sucks in some ways as well; suck isn't an absolute term. TCP sucks in very specific ways the article talks about.
Compare “TCP Sucks” (easy to read—communicates, concisely, that this is an opinion piece, and what the opinion is)
or “Limitations of TCP” (misrepresents the article as informational)
(Also, if you're designing a communications system, don't give it the same name as another communications system.)
TCP has its limitations, of course. It was designed to work over a wide range of connections, including dial-up, and it does. It's suboptimal for broadband server-to-client connections where big server farms from a small number of vendors dominate. Hence QUIC and HTTP/2/3. Still, those don't provide a huge improvement in performance.[1] Even Google merely claims "on average, QUIC reduces Google search latency by 8% and 3.5% for desktop and mobile users respectively, and reduces video rebuffer time by 18% for desktop and 15.3% for mobile users." That's marginal. An ad blocker probably has more effect.
The author is worried about the overhead of the three-way handshake, but the overhead of setting up TLS is far worse.
[1] https://conferences.sigcomm.org/imc/2017/papers/imc17-final3...
I’ve tried to design stream transports on top of UDP. It is doable if the scope is narrow and you actually understand a bit of what went into other protocols (like TCP). But it isn’t easy.
1) TCP checksums are a bit weak.
The author claims that we don't know how many errors are not caught by TCP's checksum, and while that's true on some level, the probability of bit errors is driven by properties of the physical medium the packet travels over, not by the transport-layer protocol in use. That's why most, if not all, physical layers that do have a significant chance of bit errors have FEC built in at the physical layer.
As a point of reference, I pulled stats on one of our servers and see about 10k bad packets out of 1.3 billion. If we assume that bit errors caused all 10k, we get an error rate of about 0.00076%. Since we need two independent bit errors to reach a state where the checksum might not catch the error, we can calculate that on this particular server we would expect about 0.076 packets corrupted in a way the existing checksum could even possibly miss (and it probably would still be caught). If that is an unacceptable error rate, you should absolutely verify message integrity at a higher layer. Still, for most applications, a fast hash that lets a bad packet through every couple of years is a good trade-off.
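To make the weakness concrete: the Internet checksum (RFC 1071) is a ones' complement sum of 16-bit words, so it cannot detect 16-bit words being reordered, and the back-of-envelope arithmetic above is easy to reproduce. A minimal sketch (the traffic numbers are the ones quoted above):

```python
def inet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: ones' complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF

# Addition is commutative, so swapping 16-bit words is invisible:
assert inet_checksum(b"\x12\x34\x56\x78") == inet_checksum(b"\x56\x78\x12\x34")

# The back-of-envelope rate from the stats above:
packets, bad = 1_300_000_000, 10_000
per_packet = bad / packets                  # ~7.7e-6 per packet
expected_double = packets * per_packet ** 2 # ~0.077 packets with two errors
```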
2) TCP's over-eager in-order delivery
The author talks about getting packets 1, 3, 2 in that order in the context of a file transfer, and in that specific scenario out-of-order delivery might be marginally preferable. Far more likely than out-of-order packets are lost packets, and a lost packet may never arrive if the connection gets closed or reset before it's retransmitted. For the vast majority of applications, having a partially transferred entity isn't helpful, but not having to deal with the complexity of out-of-order and partial transfers is.
This issue, however, is overblown by the author. It's not that often that earlier packets are delayed while newer packets arrive faster, and when that does happen your problem is usually much bigger than the receiver waiting for bytes 101-200 before handing 201-300 to the application...
To me the most annoying parts of TCP are:
- Slow start
- Single channel
- Single mode
Everyone's life would be so much better if, within a single established connection, you could open multiple channels in multiple modes (framed vs. stream). Each channel could have its own ordering, with congestion control overseeing the entire bunch of channels.
You can build something similar on top of TCP today, but it's not the same. The best part of TCP is that it's a standard, available everywhere, without conflicting implementations.
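As a rough illustration of the idea, a multi-channel framing layer over a single TCP stream can be as simple as a channel id plus a length prefix per frame. A hypothetical wire format (the 2-byte id and 4-byte length are arbitrary choices, not any standard):

```python
import struct

FRAME_HEADER = struct.Struct("!HI")  # 2-byte channel id, 4-byte payload length

def encode_frame(channel_id: int, payload: bytes) -> bytes:
    # One logical channel's message, ready to interleave on the shared stream.
    return FRAME_HEADER.pack(channel_id, len(payload)) + payload

def decode_frames(buf: bytes):
    # Returns (complete frames, leftover bytes that still need more data).
    frames, off = [], 0
    while off + FRAME_HEADER.size <= len(buf):
        cid, length = FRAME_HEADER.unpack_from(buf, off)
        end = off + FRAME_HEADER.size + length
        if end > len(buf):
            break  # partial frame: keep the bytes, wait for the next read
        frames.append((cid, buf[off + FRAME_HEADER.size:end]))
        off = end
    return frames, buf[off:]
```

Two channels interleave freely on one connection, but this can't fix head-of-line blocking: a lost TCP segment still stalls every channel, which is exactly why QUIC moved the framing below the retransmission layer.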
I personally don't think IP (TCP and UDP included) will be used on Internet 2.0. There's just too much baggage in those protocols.
Perhaps we'll see Internet 1.5 leverage encapsulation and build a huge underlying canvas for new protocols to blossom, but we seem to be approaching the point where anything with an IP address is too much of a hassle to maintain and a long-term security liability/commitment.
Just moving IP to the borders of a network would open up space for more secure protocols.
It's probably time we put the middle one behind port 79 anyway.
From my relatively-inexperienced perspective, notwithstanding the widespread protocol ossification of Internet network infrastructure, the underlying MTU for each connection seems to be the primary constraint that any viable alternative must deal with effectively. This article did not seem to discuss MTU considerations at all. It's hard to take a protocol argument seriously that neglects to deal with such a fundamental constraint of network infrastructure.
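For reference, the arithmetic behind the MTU constraint is simple: every layer's header eats into the per-packet payload budget, and any protocol layered on IP inherits that budget. A sketch with typical Ethernet numbers (assuming IPv4 without options):

```python
# Typical Ethernet MTU and minimal header sizes (IPv4, no IP/TCP options).
MTU = 1500
IPV4_HEADER = 20
UDP_HEADER = 8
TCP_HEADER = 20

max_udp_payload = MTU - IPV4_HEADER - UDP_HEADER  # 1472 bytes per datagram
max_tcp_payload = MTU - IPV4_HEADER - TCP_HEADER  # 1460 bytes, the classic MSS
```

A UDP-based transport that sends datagrams larger than this budget triggers IP fragmentation (or drops, where fragmentation is blocked), which is why QUIC and friends spend real effort on path MTU discovery.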
The main problem is that namespacing is an extremely political process in any system, including NDN, and so we cannot have nice things.
It was never designed to be the all-purpose universal transport protocol either. Or it was, but sane people realize the time window in which TCP could be deemed universal was very short.
But... before you get too caught up in bashing... if I were you I would spend just a tiny bit of time trying to really understand why we are in this situation.
It might be because it is "just enough" to build upon.
You are free to use bare UDP or even bare IP for your application if you are a masochist or have spare budget to allocate to fun NIH projects.
For some reason none of these projects get traction.
As an analogy, consider high-power-factor power supplies. Nobody is going to care, at home, what the power factor of their PSU is. However, a poor power factor at a large scale (the electrical grid) translates to millions of dollars of unused current capacity. The money left on the table was so large that PFC is everywhere these days. The same thing will happen with TCP's replacement. Just give it time.
The game-theory view is that any organization that would like to push this would have to spend an enormous amount of resources reinventing almost everything that has anything to do with TCP. Remember, it is implemented in hardware in many different types of devices, stacks, and applications; it permeates almost everything. The application I am working on right now, which has lived for over a decade and will live for another, has TCP artifacts all over it. Who's going to want to fix that for maybe a tiny bit of additional performance?
There aren't very many applications that have TCP as their single biggest performance problem with the best ROI for fixing it. Almost every one is integrated with a bunch of other applications over TCP, causing a chicken-and-egg problem.
Maybe Google could do that? IDK. They would maybe do it after they have reinvented almost everything else in their DC infrastructure.
For internal traffic they have the option to choose custom congestion controllers that don't care about being fair to Reno and can also tolerate a few packet losses. Linux has offered options here for a long time.
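One of those long-standing Linux options is the per-socket `TCP_CONGESTION` socket option. A small sketch (guarded, since the option is Linux-only, and the named algorithm must be loaded in the kernel; the function name here is made up):

```python
import socket

def set_congestion_control(sock: socket.socket, name: bytes) -> bytes:
    """Pick a congestion controller for one socket; returns the active one.

    Linux-only: the option doesn't exist on other platforms, and the named
    algorithm must be available in
    /proc/sys/net/ipv4/tcp_available_congestion_control.
    """
    if not hasattr(socket, "TCP_CONGESTION"):
        return b""  # unsupported platform, leave the system default in place
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, name)
    raw = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
    return raw.rstrip(b"\x00")
```

A datacenter-internal service that doesn't need to be fair to Reno might call `set_congestion_control(s, b"bbr")` or `b"dctcp"` here.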
My entirely uninformed opinion is that if it's possible to do everything TCP does over UDP, we should do so simply because UDP is a simpler protocol. This means that, broadly speaking, I think the QUIC protocol is an improvement. (Whether or not it's worth implementing the change, I have no opinion on.)
Obviously this is a very naive understanding, and I welcome others telling me where my simple model breaks down. Understandably, the complexity is not removed and is only pushed up the protocol stack, but I would argue that the higher complexity belongs at the application layer rather than the transport layer.
UDP is actually a nightmare precisely because it is so simple. There's nothing more than incoming data arriving on a particular socket. Where should your router send it after NAT? There are countless RFCs and articles on UDP NAT traversal, and you'll find it's never as reliable as plain old TCP. Good luck with UDP hole punching when two clients pick the same port to receive data (many games do this; if you and someone else on your local network can't play the same game online at the same time, this is why).
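For the curious, the classic hole-punching dance is just: both peers, having learned each other's public endpoint from a rendezvous server, fire UDP packets at each other until the NAT mappings line up. A minimal sketch (the function name, retry count, and timeout are made up; whether it works at all depends on the NAT type):

```python
import socket
from typing import Optional, Tuple

def punch(local_port: int, peer: Tuple[str, int], attempts: int = 3) -> Optional[bytes]:
    """Both peers call this at roughly the same time, each having learned
    the other's public (ip, port) from a rendezvous server beforehand."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", local_port))
    sock.settimeout(0.5)
    try:
        for _ in range(attempts):
            # The outbound send is what creates the mapping in our NAT;
            # once both sides have sent, each NAT should pass the other's
            # inbound packets (for cone NATs; symmetric NATs defeat this).
            sock.sendto(b"punch", peer)
            try:
                data, _addr = sock.recvfrom(2048)
                sock.sendto(b"punch", peer)  # echo so the peer completes too
                return data
            except socket.timeout:
                continue  # resend and keep listening
        return None
    finally:
        sock.close()
```

On loopback two `punch` calls pointed at each other succeed trivially; across real NATs this is exactly the part that fails when both clients behind one NAT picked the same local port.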
Oh, and if you actually do need packet ordering, you'll be wasting time re-implementing what is essentially TCP within UDP. The major advantage of UDP is that it gives you the option of not caring about packet ordering, which is good, but only for very specific use cases (lossy voice/video communication and games).
The article also talks about how TCP doesn't know the speed of your network and has a slow ramp-up. You know what UDP does? It sends out data as fast as you tell it to, and if that's too much for the network, it's simply dropped. TCP stops you from sending too much.
That's just a start too.
So, unsurprisingly, TCP has complexity. If you do go down the UDP path, which can only ever be justified by a very specific need, you'll need all that TCP knowledge and more, as you'll inevitably have to re-implement much of what TCP does.
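To give a flavor of that re-implementation burden: even just restoring in-order delivery over UDP means buffering out-of-order datagrams against a sequence number. A tiny sketch of one piece of what TCP already does for you:

```python
class ReorderBuffer:
    """Holds out-of-order datagrams until the missing sequence numbers
    arrive, releasing payloads strictly in order, as TCP does internally.
    (Retransmission, ACKs, and window management are still missing.)"""

    def __init__(self) -> None:
        self.next_seq = 0
        self.pending = {}  # seq -> payload, waiting for earlier packets

    def push(self, seq, payload):
        # Stash anything new; duplicates and stale packets are dropped.
        if seq >= self.next_seq and seq not in self.pending:
            self.pending[seq] = payload
        ready = []
        while self.next_seq in self.pending:
            ready.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return ready
```

Receiving sequence 0, then 2, then 1 releases nothing for 2 until 1 arrives, at which point both come out together, which is precisely the head-of-line behavior the article complains about in TCP.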
Routers read the TCP headers of packets (SYN, FIN, RST flags) to figure out the state of the TCP connection, and then make decisions based on that state. With UDP that doesn't work. This makes a HUGE difference for NAT. With UDP, there's no connection and no state, so at best a router might guess correctly using some loose heuristics, but at worst the router will just drop your UDP packets on the floor.
Because TCP does far more than UDP.
if it's possible to do everything TCP does over UDP we should do so just based on the fact that it is a simpler protocol
The complexity of TCP is mostly needed, so reimplementing the same complexity yourself on top of UDP doesn't really gain you anything. The overall result isn't simpler.
Everything which can be done with a truck can be done with a horse wagon.
Yeah, that analogy doesn't fully work, but QUIC is no proof of UDP being the right tool; QUIC is an iteration of TCP, looking at some shortcomings of TCP (in the view of QUIC's creators) and trying to fix them. If it is successful, the chapters about it in your book will be as long as those about TCP, and much of the content will probably be similar. It does its own form of connection negotiation (even more complex than TCP's, as it mandates TLS, which in TCP lives fully in the application layer) and flow control. Basing it on top of UDP instead of using IP directly is not because UDP is best, but because routers, firewalls, and operating systems can already handle it. A new IP packet type would need lots of work by everybody.
For many usecases TCP works well and you should avoid adding another dependency.
Over-engineering and technical debt are a by-product of: "It is difficult to get a man to understand something, when his salary depends upon his not understanding it!" - Upton Sinclair
On the backbone nothing uses TCP, not even IP. It's all BGP or other proprietary formats. Edit: I'm pretty sure the backbones with billions invested in fiber across oceans are not running vanilla IP from the '70s on their hardware/cables, are they? Or if they do, then it's case closed for anything else really, any way you turn this.
Re-implementing delivery guarantees won't ever solve anything better. TCP and HTTP/1.1 are the final protocols for the human race in this universe; get over it and start building something useful on top of them instead.
Do what? "The backbone" usually refers to the core networks internet service providers and so on use. They carry whatever higher-layer protocols (including TCP and UDP) that their users want to use. It's like saying "on the highway nobody uses cars". The two are different parts of the stack.
Also, BGP is a routing protocol (to share IP reachability information) which runs over TCP. It's not a replacement for TCP. I can't download cat pictures over BGP. Are you referring to things like MPLS? Even that isn't a TCP-level thing. It sits halfway between layer 2 (Ethernet and friends) and layer 3 (mostly IP). TCP is still on top of all of this.
BGP is not a protocol for moving data - it's a way to find the path to deliver IP packets. Nor is it proprietary :/
.... BGP runs over TCP.
> TCP and HTTP/1.1 are the final protocols for the human race in this universe
Right, QUIC is only used by all things Google.