I eventually learned the subject properly through High Performance Browser Networking. This is the one book I would recommend to any software developer. Available for free here - https://hpbn.co
I've easily had a dozen times where I was able to resolve a complex problem or figure out how to do the thing I wanted to do through examination of this flowchart.
The most practical thing I had to do with networking came up in a job interview where I paired with someone to work on some low-layer stuff.
Not sure that starting with HTTP requests is a good idea; you need to start at layer 1 and work up.
I'll definitely have to check out the rest of their 'compendium': https://www.destroyallsoftware.com/compendium
I've served the technical writing role a few times.
If it (your product) is hard to describe, you probably did it wrong. At the very least, keep trying until things make sense. Better mental models, metaphors, workflows, whatever.
I was once asked (by a school principal, a former writing teacher) why software developers are such terrible writers. I replied that all the good software developers I know are also good writers. That if you can write an essay, you can also code. The problem is that most people are terrible writers, programmers included.
I will admit that writing is harder than programming. Because people are far more interesting, complicated, nuanced than computers.
Reading someone else's code is the closest thing we have to mind reading. More so than prose. IMHO.
Miscommunication and ambiguity are the norm. We all just have to accept that and keep trying.
This was revolutionary back in the days when most protocol specs were proprietary and designed to protect the priesthood (usually of a particular vendor) rather than facilitate interoperability. It's hard to imagine now, but one of the big reasons TCP/IP won was that it actually encouraged interoperability and interworkability. (Jan Stefferud drew a distinction between those two terms in this context...)
His explanations focus on the most important bits (which he has skillfully prioritized based on his expert knowledge) and avoid highly domain-specific nomenclature, i.e. he gives a thorough-yet-concise explanation in a way that a reasonably intelligent layperson can understand.
Does that make sense?
Nowadays, 8b/10b (or more realistically 128b/130b) is critical to enable clock recovery by making sure the signal transitions frequently enough.
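To make the clock-recovery point concrete, here's a minimal sketch (not an actual 8b/10b encoder; the 10-bit string below is just illustrative) showing why un-coded data can starve the receiver of transitions:

```python
# Sketch: why line coding matters for clock recovery.
# A receiver recovers the clock from signal transitions; a long run of
# identical bits gives it nothing to lock onto.

def longest_run(bits: str) -> int:
    """Length of the longest run of identical symbols."""
    best = run = 1
    for prev, cur in zip(bits, bits[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

raw = "0" * 64                  # a block of zero bytes on the wire: no edges at all
assert longest_run(raw) == 64   # the receiver's clock would drift freely here

# 8b/10b guarantees at most 5 identical bits in a row (plus DC balance),
# so the receiver always sees a transition within 5 bit times.
encoded = "1100000101"          # illustrative 10-bit word, not a real code table entry
assert longest_run(encoded) <= 5
```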
> Computers can't count past 1
Reminds me of this excellent quote: "Every idiot can count to one" - Bob Widlar
This line of thinking makes clearer what signal reflections are and why they are a problem, and what the role of eye diagrams is.
Back when most internet packets were single characters representing keystrokes over something like Telnet, an algorithm was invented to wait a moment before sending ACKs so they could possibly piggyback on the next data packet.
This ended up interacting badly with TCP slow start and adaptive congestion control for years and was only resolved in the early 2000s, if I recall correctly.
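That interaction is why many latency-sensitive applications still disable Nagle's algorithm today. A minimal sketch using the standard socket option (this just sets the flag; whether it matters depends on your traffic pattern):

```python
# Sketch: disabling Nagle's algorithm on a TCP socket.
# Nagle's algorithm holds back small writes until outstanding data is ACKed;
# combined with delayed ACKs on the peer, that can add noticeable stalls.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# TCP_NODELAY tells the stack to send small segments immediately
# instead of waiting for earlier segments to be acknowledged.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
assert sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0
sock.close()
```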
The inventor of that algorithm posts here frequently if he wants to comment on my post about this again. :)
And this is why engineering is interesting. Your design choices can have lots of unintended side effects.
Adjust to what? Ethernet is not synchronous, it's asynchronous. There is no shared clock.
The interframe gap goes back to the days of CSMA/CD. The interframe gap was the period during which end stations would contend for the shared medium. Without this an end station could continuously stream and monopolize the network.
> As an example, Cisco ASR 9922 routers have a maximum capacity of 160 terabits per second. Assuming full 1,500 byte packets (12,000 bits), that's 13,333,333,333 packets per second in a single 19 inch rack!
The ASR 9922 is 20 slots of 3.2 Tbps per slot. That's a 64 Tbps chassis, so that's 5,333,333,333 packets per second in a single 19 inch rack. Cisco's 160 Tbps number is for their hypothetical multi-chassis setup, which is fun to market but nonsensical to build or purchase.
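The arithmetic checks out (sketch, using the 20-slot / 3.2 Tbps figures above):

```python
# Checking the packets-per-second arithmetic for a 20-slot, 3.2 Tbps/slot chassis.
slots = 20
per_slot_bps = 3.2e12
packet_bits = 1500 * 8          # a full 1,500-byte packet = 12,000 bits

chassis_bps = slots * per_slot_bps          # 64 Tbps total
pps = chassis_bps / packet_bits

print(f"{chassis_bps / 1e12:.0f} Tbps -> {pps:,.0f} packets/sec")
# -> 64 Tbps -> 5,333,333,333 packets/sec
```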
Now, why not a NACK rather than relying on duplicate ACKs? First, the NACK can get lost, so you'd probably want to trigger fast retransmit off of duplicate ACKs anyway for efficiency. But when should the receiver send a NACK? Given that packets can be reordered, likely you want to wait for a few subsequent packets to arrive before you send the NACK. In that time, you've been acking the arriving packets. So your NACK will arrive just after the sender has concluded the packet was lost from the arriving duplicate ACKs. At this point the NACK is serving no purpose.
Even if you wanted to change TCP and assume no packets were reordered, so send a NACK as soon as the packet after the missing one arrives, you'd still be ACKing the arriving packet (which due to TCP's cumulative ACK appears to the sender as a duplicate ACK). The sender could just as well retransmit after one duplicate ACK and get the same behaviour. In the end, sending a NACK doesn't really add anything.
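The duplicate-ACK heuristic described above can be sketched in a few lines (a hypothetical minimal model using TCP's usual threshold of three dupes, not a real TCP implementation):

```python
# Sketch of fast retransmit: the sender treats three duplicate cumulative
# ACKs as evidence that the segment the receiver keeps asking for was lost.

DUP_ACK_THRESHOLD = 3

def sender_on_acks(acks):
    """Yield sequence numbers the sender would fast-retransmit."""
    last_ack, dup_count = None, 0
    for ack in acks:
        if ack == last_ack:
            dup_count += 1
            if dup_count == DUP_ACK_THRESHOLD:
                yield ack  # retransmit the segment starting at this seq number
        else:
            last_ack, dup_count = ack, 0

# Receiver got segments 1, 2, 4, 5, 6 (segment 3 was lost): each arrival
# after the hole produces another cumulative ACK for 3.
acks = [2, 3, 3, 3, 3]
print(list(sender_on_acks(acks)))  # -> [3]
```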
I thought about an "I lost a packet" message, but if it got lost in transmission, the sender would never know to resend the packet. Instead, the sender has the responsibility of keeping track of which packets it has received an ACK for. If it doesn't receive an ACK within a certain interval, it resends the packet and waits for another ACK.
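That sender-side bookkeeping can be sketched like this (a hypothetical minimal model with a fixed timeout; real TCP computes an adaptive RTO from measured round-trip times):

```python
# Sketch of timeout-based retransmission: the sender tracks unACKed
# packets by send time and resends any whose timer has expired.

RTO = 1.0  # retransmission timeout in seconds (fixed here for simplicity)

def packets_to_retransmit(unacked, now):
    """unacked: dict of seq -> send_time. Return seqs whose timer expired."""
    return sorted(seq for seq, sent in unacked.items() if now - sent >= RTO)

unacked = {1: 0.0, 2: 0.2, 3: 0.9}   # seq -> time the packet was sent
print(packets_to_retransmit(unacked, now=1.1))  # -> [1]
```

On an ACK for a sequence number, the sender would simply delete that entry from `unacked`, which is why it never needs the receiver to report a loss.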
If the receiver is missing a packet, it can just XOR the last 10 packets plus the correction packet to get the missing packet.
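The XOR trick works because the correction packet is the XOR of the whole group, so XORing the survivors back in cancels everything except the missing packet. A small sketch (assumes equal-length packets and at most one loss per group):

```python
# Sketch of the XOR parity scheme: one correction packet per group of 10
# lets the receiver rebuild any single missing packet.

def xor_packets(packets):
    """Bytewise XOR of equal-length packets."""
    out = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            out[i] ^= b
    return bytes(out)

group = [bytes([i] * 4) for i in range(10)]   # 10 data packets
parity = xor_packets(group)                   # the correction packet

lost = group[3]                                # pretend packet 3 was dropped
survivors = group[:3] + group[4:]
recovered = xor_packets(survivors + [parity])
assert recovered == lost
```

The catch, of course, is that this only recovers one loss per group; lose two packets out of the ten and the XOR no longer pins down either of them.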