And you also can't set up your system to send data on a fixed 5-second cadence, because any network jitter will show up as hiccups on the receiving end.
You could set up your system to send new data each time the previous buffer is acknowledged, but that's mostly pointless. Suppose you get lucky with a good connection and can send data to be rendered 4.970 to 5.000 seconds from now: what's the difference, for the user, between doing that and waiting until you have data covering 4.900 to 5.000 seconds, which reduces the network load by approximately a factor of 3?
I think 100ms is a reasonable minimum batch size.
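As a minimal sketch of that idea, the sender below accumulates generated data until at least 100ms worth is buffered, then ships it as a single network write instead of one write per frame. The `send` callback, the `BatchingSender` name, and the per-frame durations are all hypothetical, not taken from any real system described above.

```python
BATCH_MS = 100  # minimum batch size, per the text above

class BatchingSender:
    """Buffer frames until at least BATCH_MS of playback time is
    accumulated, then hand the whole batch to a send() callback."""

    def __init__(self, send):
        self.send = send          # hypothetical callback taking a list of frames
        self.buffer = []
        self.buffered_ms = 0.0

    def push(self, frame, duration_ms):
        """Add one frame of `duration_ms` milliseconds; flush when full."""
        self.buffer.append(frame)
        self.buffered_ms += duration_ms
        if self.buffered_ms >= BATCH_MS:
            self.send(self.buffer)
            self.buffer = []
            self.buffered_ms = 0.0
```

With 10ms frames, this sends one packet per ten frames rather than ten packets, trading up to 100ms of extra buffering latency for roughly a tenfold reduction in packet count.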