A 60-page document explaining how to compress text name/value pairs that are mostly unchanged between requests is "reasonable"?
A static compression table with proxy authentication fields in it, as if the network between your browser and your LAN proxy is somehow the bottleneck, is reasonable?
This is a most over-engineered protocol, one where nobody knows how the tables were constructed or from what data, and nobody can quantify what the benefits of its features will really be. For instance, how much is saved by using a Huffman encoding instead of a simple LZ encoding, a simple store/recall scheme, or no compression at all? Nobody knows! Google did some magic research in private and decided on the most complicated compression method, and therefore HTTP/2 must use it.
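
For what it's worth, here is a rough way to put a number on the "simple LZ encoding" baseline: run a couple of requests through one shared zlib/DEFLATE stream (roughly what SPDY did for headers), so the second request's name/value pairs get matched against the first. This is just a sketch of the kind of measurement that is missing; the header values are invented for illustration, not data from the working group.

    # Back-of-the-envelope comparison: raw header bytes vs. a shared
    # zlib/DEFLATE (LZ-based) stream across requests. Sample headers
    # below are made up for illustration.
    import zlib

    requests = [
        b"GET /index.html HTTP/1.1\r\n"
        b"Host: www.example.com\r\n"
        b"User-Agent: Mozilla/5.0 (X11; Linux x86_64)\r\n"
        b"Accept: text/html,application/xhtml+xml\r\n"
        b"Accept-Encoding: gzip, deflate\r\n"
        b"Cookie: session=abc123\r\n\r\n",
        b"GET /style.css HTTP/1.1\r\n"
        b"Host: www.example.com\r\n"
        b"User-Agent: Mozilla/5.0 (X11; Linux x86_64)\r\n"
        b"Accept: text/css\r\n"
        b"Accept-Encoding: gzip, deflate\r\n"
        b"Cookie: session=abc123\r\n\r\n",
    ]

    # One compressor shared across requests, so later headers are
    # LZ-matched against earlier ones (the store/recall effect).
    comp = zlib.compressobj()
    for i, hdr in enumerate(requests, 1):
        out = comp.compress(hdr) + comp.flush(zlib.Z_SYNC_FLUSH)
        print(f"request {i}: raw={len(hdr)} bytes, lz={len(out)} bytes")

A few lines of code like this would answer the question for the LZ case; nothing equivalent was ever published for the choices HPACK actually made.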
This HTTP/2 process is insane.