If you're talking about Google's homepage, the answer is "a lot". You can check for yourself: go to google.com, select "view source", and compare the amount of Closure-compiled JavaScript to the HTML markup.
- We're on the verge of another browser monopoly, cheered on by developers embracing the single controlling vendor.
- We already have sites declaring that they "work best in Chrome" when what they really mean is "we only bothered to test in Chrome".
- People are not only using UA sniffing with inevitable disastrous results, they're proclaiming loudly that it's both necessary and "the best" solution.
- The amount of unnecessary JavaScript is truly gargantuan, because how else are you going to pad your resume?
I mean, really, what's next?
Are we going to start adopting image slice layouts again because browsers gained machine vision capabilities?
Since you're replying to my comment and paraphrasing a sentence of mine, I'm guessing I'm "people".
Since you're replying to my comment and paraphrasing a sentence of mine, I'm guessing I'm "people". I'm curious to hear what better alternative, if any, you'd use to determine browser identity or characteristics (implied by name and version) on the server side. "Do not detect the browser on the server side" is not a valid answer; it suggests to me that the person proffering it isn't familiar with large-scale development of performant web apps or websites for heterogeneous browsers. A lot of browser inconsistencies have to be papered over (e.g. with polyfills or alternative algorithm implementations) without shipping unnecessary code to browsers that don't need it. If you have a technique faster and/or better than UA sniffing on the server side, I'll be happy to learn from you.
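To make the server-side approach concrete, here's a minimal sketch of the kind of UA check I mean. The regexes, version cutoffs, and bundle names (`/app.modern.js`, `/app.legacy.js`) are illustrative assumptions only; real-world sniffing relies on a maintained UA database, not two hand-rolled patterns:

```javascript
// Minimal sketch of server-side UA sniffing to pick which bundle to ship.
// Version cutoffs and bundle names below are hypothetical examples.
function needsLegacyBundle(userAgent) {
  // Chrome's UA also contains "Safari/", so check Chrome first.
  const chrome = userAgent.match(/Chrome\/(\d+)/);
  if (chrome) return Number(chrome[1]) < 60; // assumed cutoff

  const firefox = userAgent.match(/Firefox\/(\d+)/);
  if (firefox) return Number(firefox[1]) < 55; // assumed cutoff

  return true; // unknown browser: serve the safe, larger bundle
}

// The server emits one script tag or the other directly in the HTML,
// so the browser's preload scanner discovers the file immediately.
function scriptTagFor(userAgent) {
  return needsLegacyBundle(userAgent)
    ? '<script src="/app.legacy.js"></script>'
    : '<script src="/app.modern.js"></script>';
}
```

The point is that the decision happens before the HTML leaves the server, so capable browsers never download the polyfill bytes at all.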
"Do JavaScript feature detection on the client" is terrible for performance if you're using it to dynamically load scripts on the critical path.
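For contrast, the client-side pattern looks roughly like this (the feature check and bundle names are hypothetical). The browser only discovers the script URL after this code has been fetched, parsed, and executed, so the real download starts a full round trip later than a server-emitted tag would:

```javascript
// Client-side feature detection that injects a script dynamically.
// Feature check and bundle names are illustrative assumptions.
function pickBundle(hasIntersectionObserver) {
  return hasIntersectionObserver ? '/app.modern.js' : '/app.legacy.js';
}

// Guarded so the decision logic above stays testable outside a browser.
if (typeof document !== 'undefined') {
  const s = document.createElement('script');
  s.src = pickBundle('IntersectionObserver' in window);
  document.head.appendChild(s); // download begins only now
}
```

That extra round trip on the critical path is exactly the cost that server-side detection avoids.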