> Have you tried writing anything more than a very simple HTTP/1.1 parser/server?
Honestly? No. I wrote an HTTP server that runs my site just fine. It also functions as a proxy server so that I can run Apache+PHP (for phpBB) on the same port 80. (The reason I don't just use Apache is that I generate my pages via C++, because I like that language a hell of a lot more than PHP.) I've also had the HTTP client download files from the internet for various projects (my favorite was to work around a licensing restriction.)
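The routing half of that setup is simple to sketch. This is not my actual code, just a hypothetical illustration of the idea: one front-end server owns port 80, forwards forum traffic to a local Apache instance, and serves everything else itself (the `/forum/` prefix and port 8080 are assumptions for the example):

```cpp
#include <string>

// Hypothetical routing decision for a front-end server that shares
// port 80 with a backend Apache+PHP instance. Returns the local port
// to proxy to, or 0 to mean "serve this request in-process."
int backendPortFor(const std::string& path) {
    // Requests under /forum/ (assumed phpBB location) get proxied to
    // Apache listening on a private local port (8080 is an assumption).
    if (path.rfind("/forum/", 0) == 0) return 8080;
    return 0;  // everything else: generate the page ourselves in C++
}
```

The front-end just opens a second TCP connection to `127.0.0.1:8080` for proxied requests and relays bytes both ways; no HTTP/2 machinery required.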
I get around 100,000 hits a month, and have not had any problems. If you think issues will arise when I start reaching Facebook levels of popularity ... I'm sure they will. But I'll never get there, so to me it doesn't really matter.
So for my use case, HTTP/2 is unbelievably more challenging and costly to support. Especially as I have about seven subdomains, and nobody's giving out free wildcard SSL certs.
I also didn't even say the added complexity is a bad thing+. Modern Windows/Linux/BSD are infinitely more complex than MS-DOS was, too. I was just pointing out that the OP's elation was misguided. (+ though to be fair, I do believe things should be kept as simple as is reasonable.)
Also, I strongly challenge this notion that you have to be 100% standards-compliant with the entire RFC to run an HTTP/1 server successfully. Because not even Apache is remotely close to that. The mantra is: liberal on your input, conservative on your output. And everyone follows that. And as a result, no major projects out there are spitting out headers split into 300 lines using arcane edge cases of the RFCs.
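To make "liberal on input" concrete, here's a minimal sketch (not production code) of the kind of deliberately tolerant header parsing this implies: lowercase the field names, trim stray whitespace, accept bare-LF line endings, and skip malformed lines rather than rejecting the whole request:

```cpp
#include <cctype>
#include <map>
#include <sstream>
#include <string>

// Deliberately lenient HTTP/1.1 header parser: normalizes field names
// to lowercase, trims whitespace around values, accepts CRLF or bare
// LF, and ignores malformed lines instead of failing the request.
std::map<std::string, std::string> parseHeaders(const std::string& raw) {
    std::map<std::string, std::string> headers;
    std::istringstream in(raw);
    std::string line;
    while (std::getline(in, line)) {
        if (!line.empty() && line.back() == '\r') line.pop_back();  // CRLF or bare LF
        if (line.empty()) break;  // blank line terminates the header block
        auto colon = line.find(':');
        if (colon == std::string::npos) continue;  // tolerate junk lines
        std::string name = line.substr(0, colon);
        std::string value = line.substr(colon + 1);
        for (auto& c : name) c = std::tolower(static_cast<unsigned char>(c));
        // trim leading/trailing spaces and tabs from the value
        auto b = value.find_first_not_of(" \t");
        auto e = value.find_last_not_of(" \t");
        value = (b == std::string::npos) ? "" : value.substr(b, e - b + 1);
        headers[name] = value;
    }
    return headers;
}
```

A parser like this happily accepts `CONTENT-length:  42` with sloppy casing and spacing, which is exactly the leniency that lets small servers interoperate without implementing every corner of the RFC grammar. The "conservative on output" half is even easier: just always emit clean, canonical `Name: value\r\n` headers yourself.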