This is surprising to me. I'd have guessed it decreases quadratically (i.e. due to the inverse square law), not exponentially.
The paragraph below seems to contain an explanation, but I don't really understand it (mainly because I don't know what that percentage "Coverage" column actually means, or what is meant by "the total distance at each QAM step").
Each data rate in the standard uses a different encoding technique. "Faster" encoding techniques cram more data into a given transmission interval but require a higher signal-to-noise ratio to be received without error. Since SNR declines with distance, you can form a rough idea of what data rate you'll be able to receive at a given distance from the transmitter.
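If it helps to make that concrete, here's a back-of-envelope sketch of the relationship. The transmit power, noise floor, path-loss exponent and per-modulation SNR thresholds below are all assumptions I've picked for illustration, not values from the standard:

```python
# Sketch: as distance grows, path loss grows, SNR falls, and the client has
# to drop to a slower, more robust modulation. All numbers are illustrative.
import math

TX_POWER_DBM = 20      # assumed AP transmit power
NOISE_FLOOR_DBM = -95  # assumed noise floor
PL_1M_DB = 40          # assumed path loss at 1 m (roughly 2.4 GHz free space)
PATH_LOSS_EXP = 3.0    # assumed indoor-ish path-loss exponent (2.0 = free space)

# (minimum SNR in dB, modulation) -- illustrative thresholds only
MODULATIONS = [(25, "256-QAM"), (18, "64-QAM"), (12, "16-QAM"),
               (7, "QPSK"), (4, "BPSK")]

def snr_db(distance_m: float) -> float:
    """SNR at a given distance under a simple log-distance path-loss model."""
    path_loss = PL_1M_DB + 10 * PATH_LOSS_EXP * math.log10(distance_m)
    return TX_POWER_DBM - path_loss - NOISE_FLOOR_DBM

def modulation_at(distance_m: float) -> str:
    """Fastest modulation whose SNR threshold is still met at this distance."""
    snr = snr_db(distance_m)
    return next((name for thresh, name in MODULATIONS if snr >= thresh),
                "out of range")

for d in (5, 15, 30, 60, 120, 240):
    print(f"{d:>3} m: SNR ~{snr_db(d):5.1f} dB -> {modulation_at(d)}")
```

The point is just that received power falls off as a power of distance, each modulation step needs a minimum SNR, and so the usable data rate steps down as you walk away from the AP - which is presumably what that coverage-per-QAM-step table is trying to show.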
However, people and vendors focus far too much on maximum throughput. I've seen data showing that even in the best conditions, clients spend about 1% of their time transmitting or receiving at the highest data rates, because they are dynamically adjusting the data rate based on the perceived SNR.
Individual clients' peak throughput also works against _aggregate_ throughput when talking about wireless networks with multiple users. If you have 100 clients, do you want one of them to be able to dominate the others, or everyone to get a more or less equal share? These peak speeds assume configurations that I would never deploy in practice, because they favour individual users and cripple aggregate throughput - things like 160 MHz wide channels.
But the sticker speed is what sells...
Correlated, but obviously bad code can really fuck with neighbors. And each client has an incentive to be greedy so users of that client get a better experience. So you fall back again to QoS for what you care about...
Basically this. The way we usually put it is that we want clients to "get on and off the channel as quickly as possible". That requires all clients in range of each other to be behaving (respecting the rules) and using fast enough data rates to minimise their consumption of precious air-time.
Under the hood, though, it's a very granular frame-by-frame, almost microsecond-by-microsecond thing that adds up to the overall throughput at a human timescale. To give you a sense, let me try to summarise the factors affecting throughput this way (there's a rough numeric sketch at the end):
- Data Rate: the transmitter can adjust the data rate up or down per frame if it wants. For example, a single TCP session on a 2.4GHz channel could in theory see data rates anywhere between 1Mbps and 450Mbps. But in practice most drivers I've seen adjust up or down incrementally. And in a healthy network, they usually hover around the top 25% of the mutually supported data rates (but they also spend very little time at the highest data rate, typically less than 1%). Also, the AP could be using a different data rate to the client, and usually is. The rx and tx directions are effectively separate streams and the data rate is always chosen solely by the transmitter.
- Block Size: Similar to TCP windowing. Data can be sent in multi-frame 'bursts' before an acknowledgement is required for the transmitter to send more. In the original Wi-Fi, every frame had to be acknowledged. Later standards introduced this idea of block acknowledgements.
- Re-transmits: Whenever acknowledgements are not received, the data has to be resent. The block size will be reduced, possibly to 1, so it will also take longer. Note that re-transmits are expected and very routine in Wi-Fi, whereas in TCP they are usually considered more of an exception (except on the internet). I've observed re-transmit rates of 20% in networks where no user is perceiving any sort of issue at all. So Wi-Fi is very robust to frame loss, up to a point, but even so, re-transmits do end up having a large impact on aggregate throughput.
- Clear channel wait time: It's no exaggeration to say that transmitters spend most of their time _waiting_ to transmit. And a big chunk of that wait time is just waiting for the medium to be clear - the clear channel assessment. If the client thinks there is a transmission going on, it just has to kill time.
- Other wait times: Even when the channel seems clear, there are various requirements to do nothing before and after transmitting. For example, the inter-frame spacing interval and the random back-off interval. These are just the rules of play. In fact, congestion avoidance on Wi-Fi could be said to be entirely a matter of timing.
Note that this is a simplification and clearly I can't mention everything or cover all the nuances. But, in the way I've framed it here, the clear-channel wait time and the re-transmit rate basically encapsulate the impact of intangibles I didn't mention, like congestion and noise/interference.
TL;DR: Wi-Fi transmissions are extremely lumpy at their native timescale, but at human timescales they often seem a lot smoother than many TCP transmissions.
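To put rough numbers on the factors above, here's a toy airtime model (my own sketch, not the 802.11 spec; the timing constants and the two scenarios are ballpark assumptions):

```python
# Toy model: effective throughput is goodput bits divided by *all* the time
# spent -- waiting for a clear channel, inter-frame spacing, random backoff,
# the frames themselves, the block acknowledgement, and re-transmissions.
# Timing constants are ballpark 802.11n/ac-style values; treat as assumptions.

SLOT_US, SIFS_US, DIFS_US = 9, 16, 34  # approximate 5 GHz timings
BLOCK_ACK_US = 32                       # assumed block-ack airtime
AVG_BACKOFF_SLOTS = 7.5                 # mean of a 0-15 slot random backoff

def effective_mbps(data_rate_mbps: float, frame_bytes: int, frames_per_burst: int,
                   retransmit_rate: float, clear_channel_wait_us: float) -> float:
    """Goodput after wait time, overheads and re-transmits (very simplified)."""
    payload_bits = frame_bytes * 8 * frames_per_burst
    burst_airtime_us = payload_bits / data_rate_mbps        # Mbps == bits per us
    overhead_us = (clear_channel_wait_us                    # medium busy
                   + DIFS_US + AVG_BACKOFF_SLOTS * SLOT_US  # the rules of play
                   + SIFS_US + BLOCK_ACK_US)                # wait for block-ack
    # Crude re-transmit model: lost frames simply have to be sent again
    total_airtime_us = burst_airtime_us * (1 + retransmit_rate) + overhead_us
    return payload_bits / total_airtime_us                  # bits per us == Mbps

# Quiet channel, big bursts, few re-transmits:
print(effective_mbps(600, 1500, 64, 0.05, clear_channel_wait_us=0))    # ~510 Mbps
# Busy channel, small bursts, 20% re-transmits:
print(effective_mbps(600, 1500, 4, 0.20, clear_channel_wait_us=2000))  # ~21 Mbps
```

Even with the data rate pinned at 600Mbps in both cases, going from a quiet channel with big bursts to a busy channel with small bursts and 20% re-transmits drops the effective throughput by more than an order of magnitude in this toy model.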
> Correlated, but obviously bad code can really fuck with neighbors.
Also true. Bad code is usually exemplified in Wi-Fi by bad drivers (looking at you Broadcom). These will cause clients to "stick" to bad APs when they should roam, or pick the wrong channel/AP/band in the first place. Intel is generally very good.
> And each client has an incentive to be greedy so users of that client get a better experience.
Greed is good in the sense that clients want to transmit their data as soon and as fast as possible, and we want them to! But they have to respect the rules. Of course there's only a handful of chipset vendors, so they mostly do. But within that, there's still plenty of room for clients and APs to do things that are _sub-optimal_ even if they are Wi-Fi legal, as per the sticky client example I mentioned.
> So you fall back again to QoS for what you care about...
Wi-Fi does indeed have its own implementation of QoS, which is of course a timing dance! But I think you're referring to QoS in higher layers like IP. So it's worth mentioning that this Wi-Fi stuff is all happening at layers 1 & 2. All the congestion detection and re-transmissions and so on that may be happening in higher-layer protocols like TCP are happening _in addition_ to what is going on at the Wi-Fi layers.
If you have a good connection and are successfully able to transmit packets to your AP at 600Mbps, and your neighbour has a poor connection and is transmitting at 6Mbps to his AP at that moment, you literally have to wait ~100 times as long for a free medium before you can attempt to transmit. And that's for every single frame. Then you have to hope his client is well-behaved enough not to transmit while you are transmitting. Otherwise you end up having to wait again and retransmit anyway.
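For a feel of the numbers (this ignores preambles and other fixed per-frame overhead, which soften the ratio somewhat):

```python
# Quick arithmetic behind the "~100 times as long" point (illustrative only)
FRAME_BYTES = 1500  # a full-size Ethernet-ish payload, as an assumption

for rate_mbps in (600, 6):
    airtime_us = FRAME_BYTES * 8 / rate_mbps  # Mbps == bits per microsecond
    print(f"{rate_mbps:>3} Mbps: ~{airtime_us:,.0f} us of airtime per frame")

# 600 Mbps: ~20 us per frame
#   6 Mbps: ~2,000 us per frame -> the slow client occupies the channel ~100x longer
```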
You might not notice this with only 2 clients. It might be the difference between an 80MBps and a 50MBps download, for example. But it decays exponentially with the number of clients.
But you're technically correct!
But then again, the sentence uses the term "signal strength", not "throughput", so that would suggest quadratic decay. But I guess "signal strength" could be meant colloquially here, and mean more than just the raw signal power received by the antenna.
It's all very fuzzy to me, as it stands.
Where is it pretty common? I have never heard that (outside of it being a mistake).
Because the variable is in the base, not the exponent.
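Spelled out (my notation, with P_r for received power, d for distance and alpha some positive constant):

```latex
% Inverse-square law: the distance d sits in the base, so this is polynomial decay
P_r \propto \frac{1}{d^{2}}

% Exponential decay would put d in the exponent instead
P_r \propto e^{-\alpha d}
```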