PostgreSQL does that because the initial number is based on samples, statistics, and heuristics, and is produced by the query planner as a side effect of building the query execution plan. I imagine Google, too, has a query planner.
One, Google has a strong incentive to avoid underestimates: underestimates carry a fairly high risk of causing RAM/CPU problems, since the actual work needed is higher than estimated, and that's an evil, ugly problem when processing input from potentially malevolent anonymous users on the net.
Two, the simplest algorithm for computing the estimate is one that'll overestimate if you choose a good combination of search terms, such as two words that hardly ever occur together. The planner's statistics know how common each search term is on its own, but may not know that this particular combination is very rare.
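To make that concrete, here's a small sketch (synthetic data, not how any real planner is implemented) of the independence assumption at work: the planner multiplies per-term selectivities, which badly overestimates the match count when the two terms are anti-correlated.

```python
import random

random.seed(42)
N = 100_000

# Synthetic corpus (hypothetical): each "document" may mention term A
# or term B, but never both -- anti-correlated terms, like two words
# that hardly ever occur together.
docs = []
for _ in range(N):
    a = random.random() < 0.05               # A appears in ~5% of docs
    b = (not a) and random.random() < 0.05   # B never co-occurs with A
    docs.append((a, b))

p_a = sum(a for a, _ in docs) / N            # per-term selectivity of A
p_b = sum(b for _, b in docs) / N            # per-term selectivity of B

# Estimate from per-term statistics alone, assuming independence:
independent_estimate = N * p_a * p_b

# True number of documents matching both terms:
actual = sum(a and b for a, b in docs)

print(f"estimated ~{independent_estimate:.0f} matches, actual {actual}")
```

With these numbers the independence-based estimate comes out in the hundreds while the true intersection is empty, which is exactly the overestimate described above.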