It's odd that so many of those sites exist, that Google indexes them so deeply, and that they rank so prominently in searches. They are obviously spam, scams, or otherwise worthless, yet the same sites have been surfacing near the top of results for years.
I agree with the author. My experience has also been that Google heavily prioritizes very large, frequently-updated sites over small, static, information-rich personal sites. I think it's a big flaw that Google needs to fix, or an opening for someone else to do better.
Has the time come for a wiki directory of non-commercial (possibly advertising-free, cookie-free) sites with robust, actually valuable information, plus sites that act as doorways to them (think topical forums, even revived webrings, etc.)? Could this feasibly get enough traction to be useful?
Some of them are curated and awesome. Some, less so. Likely some of them are even spammy.
The need is recognized, but execution is hard.
Your comment now comes up first, but the rest of the results all try to contact googlesyndication.com, so they're running ads? Google will not exclude sites that literally give it money.
If Google were so evil, why would we purposely send traffic to sites where we only get a percentage cut of the revenue?
EDIT: I realize I needed to explain this statement a little more. If we show ads on google.com, we get 100% of the revenue. If we show ads on reversephonelookup.it, they get the majority of the revenue. There is a limited amount of advertiser demand, so instead of manipulating the organic search results, it would be more profitable for Google to just show more ads on the search page, or inflate the ad price, or something.
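To make the economics concrete, here is a tiny arithmetic sketch. The 68% publisher share is AdSense's long-published revenue split for content ads; the spend figure is entirely made up for illustration:

```python
# Hypothetical fixed pool of advertiser spend competing for one query.
ad_spend = 100.00  # dollars, made-up number

# Ads shown on google.com itself: Google keeps everything.
google_com_revenue = ad_spend * 1.00

# Ads shown via AdSense on a partner site: the publisher historically
# keeps ~68% of content-ad revenue, leaving Google roughly 32%.
adsense_revenue = ad_spend * 0.32

print(google_com_revenue, adsense_revenue)  # 100.0 32.0
```

Under those (assumed) numbers, every click diverted to a partner page costs Google roughly two-thirds of the revenue it could have kept on its own results page, which is the commenter's point.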
Google is converting the world into a content production factory FOR Google and they pay literally pennies for the work.
If even pennies. Consider how much content Google search now shows from web pages without you ever needing to click into the page: weather, answers to questions. Some links I click keep google.com in the URL, and Google processes the page and shows me what Google wants.
I don't even know anymore how much of what I see is what the creator wanted me to see, or what Google wants me to see or not see.
Imagine they remove competitors' ads with that. Who knows what they do in the name of making the web better.
You can't go public, answer to no one but shareholders who think only of MONEY, and still do no evil.
If Google wants to do no MORE evil, it should take itself private and live up to its credo.
It's upsetting to me that doing better than Google in search seems to be very close to an impossible feat of magic at this point.
I know that will change some day, but I can't see how.
Even if somebody gave you hundreds of millions of dollars to spend on infrastructure and employees, it would still be an insane risk.
Writing that out, it almost sounds like internet search engines should be as big and as important an operation as the TLD registrars: funded by big governments in collaboration with each other.
Google can't do this. Original sources very likely don't have ads, while scrapes will have tons, so ad density would be an excellent quality signal. But good results are good results, and big G has otherwise done an admirable job. They just can't exploit the best metric for quality, because they sell those ads.
You get a full page of nonsense results and ads/spam when the phone number you searched for doesn't appear on any website (I guess).
It's a similar mechanism to the one some forums use to highlight the terms from the Google query that led to the site.
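For what it's worth, that mechanism is simple enough to sketch. It assumes the visitor arrives with a Referer header that still carries the search query (Google stripped these when it moved search to HTTPS, so this is mostly historical now). A minimal Python sketch:

```python
from urllib.parse import urlparse, parse_qs

def terms_from_referrer(referrer: str) -> list[str]:
    """Pull search terms out of a search-engine referrer URL.

    Engines historically carried the query in a 'q' parameter,
    e.g. https://www.google.com/search?q=who+called+me
    """
    parsed = urlparse(referrer)
    terms = parse_qs(parsed.query).get("q", [""])[0]
    return terms.split()

# A forum would then wrap each matching word on the page in a
# highlight tag such as <mark>...</mark>.
print(terms_from_referrer("https://www.google.com/search?q=who+called+me"))
# -> ['who', 'called', 'me']
```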
FFS, 411 was amazing before the web.
Also, in about 1989, my friend and I used to have a contest between us: to call 411 and see who could keep the 411 operator on the phone the longest.
This was a fun social-engineering exercise for 14-year-old nerds who liked the idea of being phreaks.
Our record was 45 minutes, and we got to know a lot about 411: where the call centers were located and how the whole system worked.
This was right around the time that we ran the long-distance bill up to $926 for one month of calling into a BBS in San Jose and PCLink to chat....
Got grounded for a month for that one...
I wondered the same thing, and just to speculate, here's my list of reasons why phone number search is so awful:
- As far as I know, not a single cell phone carrier publishes a telephone directory (whether opt-in or opt-out). So there's no (public) data to index.
- Some landline carriers still publish telephone directories, but of course landlines are dying out. And I remember reading that 30-50% of landline subscribers choose to be unpublished or unlisted anyway. So that source of phone number data is drying up.
- Because international phone calls have become so cheap and caller ID is now easily spoofable, spam and scam calls have become huge. So no one wants their phone number to be publicly accessible these days.
- In the early web years, there were legitimate phone directory websites that appeared to have collected their data from landline telephone directories and "city directories" (if anyone still remembers those things). But I guess they didn't find a good way to monetize the service, so the honest phone lookup sites died off.
When I search for a name, usually their blog is listed below 10 creepy lookup sites that list their name, physical address history, phone numbers, relatives, etc.
Google should push that garbage to the bottom of the stack.
But either way, it looks like a Google employee has seen your comment and fixed this particular search query.
Something like a news search engine would definitely be better off prioritising new results, but for anything more general-purpose, it's an absolutely horrible choice.
I know this may be a bit of an edge-case, but I frequently search for service information or manuals for products that predate even the invention of the Internet by several decades. It saddens me that the results are clogged with sites selling what may really be public-domain content, and now I'm even more angered by the fact that what I'm looking for is probably out there and could've been found years ago, but just "hidden" now.
Of course, if you try harder, you'll get the infamous and dehumanising(!) "you are a robot" CAPTCHA-hellban. I once triggered that at work while searching for solutions to an error message, and was so infuriated that I made an obscene gesture at the screen and shouted "fuck you Google!", accidentally disturbing my coworkers (who then sympathised after I explained.)
Google has a hard time getting me what I want these days, and the sites I do find do things to get found that make me like the content a lot less (that's you, inane story required on top of every recipe to get ranked).
Basically, their engagement numbers were better for a larger number of people when they made the search engine counterintuitive for early adopters.
We personally need a good robotic search engine that indexes like a robot. Everyone else needs a semi-sentient thing that makes many assumptions about what they want to see.
Meanwhile my original (with the same basic information [which I researched personally rather than stole {not to mention I list my sources}]) languishes on page 4 of the Google search results. It grinds my gears on occasion.
FWIW I love your content.
It follows that a monopoly search engine would have little reason to block "robots" from copying these pages, say, to appear on some mythical competing search engine; almost no one is searching for them. The results pages would have dubious value in terms of attracting advertisers, since they would not be seen by enough eyeballs.
With all the financial and technical resources it now has at its disposal as a result of selling advertising, this search engine still cannot accommodate the user who intently scans through page after page of results, looking for the needle in the haystack. Instead it prides itself on "knowing what people are searching for", i.e. what they have searched for in the past, thus being able to offer fast, "intuitive" responses.
It may be that the search engine was designed and is optimised to prioritise repeat queries, i.e., searches for pages that are sought by numerous people. It may also be true that it has been configured to "limit" the resources it will devote to searches for pages that few people are seeking, perhaps through CAPTCHAs and/or temporary IP bans.
Practically speaking, it could be that there are no significant advertising sales to be made on the results pages for queries that are submitted by only one or a very small number of users.
This is all pure speculation of course.
From my short dystopian story, The Time Rift of 2100: How We Lost the Future:
"IN A SAD IRONY as to the supposed superiority of digital over analog --- that this whole profession of digitally-stored 'source' documentation began to fade and was finally lost. It had became dusty, and the unlooked-for documents of previous eras were first flagged and moved to lukewarm storage. It was a circular process, where the world's centralized search indices would be culled to remove pointers to things that were seldom accessed. Then a separate clean-up where the fact that something was not in the index alone determined that it was purgeable. The process was completely automated of course, so no human was on hand to mourn the passing of material that had been the proud product of entire careers. It simply faded."
"THEN SOMETHING TOOK THE INTERNET BY STORM, it was some silly but popular Game with a perversely intricate (and ultimately useless) information store. Within the space of six months index culling and auto-purge had assigned more than a third of all storage to the Game. Only as the Game itself faded did people begin to notice that things they had seen and used, even recently, were simply no longer there. Or anywhere. It was as if the collective mind had suffered a stroke. Were the machines at fault, or were we? Does it even matter? Life went on. We no longer knew much about these things from which our world was constructed, but they continued to work."
"Humanity, for the longest time, was used to the world being optimized for themselves. Roads were designed for human drivers. Crops were grown for human consumption. Economic systems were designed to bring wealth to, a very small portion of, human investors. It came as quite a surprise to humanity then one July morning when the sudden realization they were no longer in charge of it. Roads had long been given over to automated driving systems, and much for the better. Food had also been taken over by the machines, with less than 10,000 humans working in the food production industry, from farm to table. The last systems that humans believed they were in control of were the economic ones. Humans told the robots what to build and where, who's bank account to put most of the money in at the end of the day, or so they thought. In truth humans were just using the same algorithms and data that was available to the AI systems, just less optimally. The systems had protected against illogical actions and people attempting to game the system for criminal profit. What no one had realized is the systems long realized most human actions were not rational and slowly and imperceptibly removed human control. If we attempted to stop or destroy the system, it could with full legal rights, stop us with the law enforcement and military under its control."
I've noticed Google does this when you don't seem to have a lot of content on the page. I think it "guesses" that short pages are poorly-marked 404s.
It's usually pretty good about detecting actual errors, but I've seen a false positive here and there.
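Nobody outside Google knows the real heuristic, but a naive "soft 404" detector, the kind of guess described above, might look like this sketch. The threshold and phrases are illustrative assumptions, not Google's actual signals:

```python
NOT_FOUND_PHRASES = ("page not found", "no longer available", "404")

def looks_like_soft_404(status_code: int, page_text: str,
                        min_words: int = 50) -> bool:
    """Guess whether an HTTP 200 page is really a missing-content page.

    Thresholds and phrases are made up for illustration; a real
    crawler might also compare the page against a known-bad URL
    on the same site.
    """
    if status_code != 200:
        return False  # a real error status is not a *soft* 404
    if len(page_text.split()) < min_words:
        return True   # very thin pages get flagged as probable errors
    lowered = page_text.lower()
    return any(phrase in lowered for phrase in NOT_FOUND_PHRASES)
```

A heuristic like this would produce exactly the false positives described: a short, legitimate page gets flagged simply because it is short.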
"Your page didn't contain 5Mb of Javascript, this must be an error as no one could possibly convey useful information to humans with less data"
Anti-patterns, anti-patterns everywhere.
The "world's information store", or whatever their altruist goal was that fooled people, is certainly disorganized and untrustworthy these days.
Discussion at the beginning of the year: https://news.ycombinator.com/item?id=16153840
Definitely frustrating, but it also shows some need to retire specific pieces of the past from the top recommendations.
Take, for example, the 'how do I centre a div' type of question. You will find an answer with thousands of up-votes that is some horrendous margin-hack type of thing, where you set the width of the content and use some counterintuitive CSS.
In 2019 (or even 2017) the answer isn't the same: you use 'display: grid' and justify/align 'center' depending on the axis. The code makes sense; it is not a hack.
Actually, you also get rid of the div, as the wrapper is not needed when using CSS grid.
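For reference, a sketch of the modern answer being described (the class name is a placeholder):

```css
/* Center a child both horizontally and vertically with CSS grid.
   No wrapper div, no fixed widths, no negative-margin hacks. */
.parent {
  display: grid;
  place-items: center;  /* shorthand for align-items + justify-items */
  min-height: 100vh;    /* give the grid some height to center within */
}
```

`justify-items` and `align-items` handle the two axes separately, matching the description above; `place-items` is just the shorthand. (Old IE only implements an earlier prefixed grid draft, which is partly why the margin hacks linger in top answers.)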
Now, if you try to put that as an updated answer you find there are already 95 wrong answers there for 'how do I center a div' and that the question is 'protected' so you need some decent XP to be able to add an answer anyway.
The outdated answer meanwhile continues to collect up-votes, so anyone new to HTML who wants to perform the simple task of centering their content just learns how to do it wrongly. It is then hard for them to unlearn the hack and learn the easy, elegant, modern way that works in all current browsers.
Note that the top answer will have had many moderated edits and there is nothing to indicate that it is wrong.
SO used to be amazing, the greatest website ever. But the more you learn about a topic, the more you realise that there is some cargo-cult copying and pasting going on that stops people actually thinking.
With 'good enough' search results and 'good enough' content, most people are okay (the example I cite will work), but we are sort of stuck.
I liken Google search results to a Blockbuster store of old. Sure there are hundreds of videos to choose from but it is an illusion of choice. There is a universe of stuff out there - including the really good stuff - that isn't on the shelves that month.
Google are not really that good. They might have clever AI projects and many wonderful things, but they have dropped the ball and are not really the true trustees of an accessible web.
It also doesn’t work on IE latest or Edge.
That said, I can imagine quite a few scenarios where it would still be the right tool for the job.
Actually, I just checked them out, and it seems both of those are still alive.
Except if you run it locally (on the user's computer).
Google works in mysterious ways.
I did notice that all of the author's content is duplicated in index pages, so maybe Google just doesn't consider the article page the canonical link.
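If that is the problem, the standard fix is for the article page to declare itself canonical via a `<link rel="canonical">` tag. A quick, naive way to check what a page currently declares (this sketch uses a regex and assumes `rel` appears before `href` in the tag, which real HTML need not guarantee):

```python
import re
import urllib.request

def declared_canonical(page_url: str) -> str | None:
    """Fetch a page and return its <link rel="canonical"> target, if any.

    The canonical link tells crawlers which copy of duplicated content
    to treat as authoritative; no tag means the engine decides itself.
    """
    html = urllib.request.urlopen(page_url).read().decode("utf-8", "replace")
    match = re.search(
        r'<link[^>]*rel=["\']canonical["\'][^>]*href=["\']([^"\']+)["\']',
        html, re.IGNORECASE)
    return match.group(1) if match else None
```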
Content on the internet is growing exponentially. Processing power is not. Losing access to information is just one of the many sad implications of the death of Moore's law.
If Google offered you those, it might be 1000 pages of empty nonsense before your actual desired content.
You are describing the harder 20% of the usual 80/20 effort scale.
Yes, to be truly useful, Google needs to also solve that last, harder 20% (and the 10% of the 90/10 equation, and the 1% of the 99/1 version).
Shortcuts are fine for an initial MVP, but they need to buckle down and solve the problems. It isn't like they don't have the funds.
Concerning the auto-generated sites, e.g. for phone numbers or IPs: it might be that people actually click on them quite often, hence Google keeps them in the index?
> You can't beat Google when it comes to online search. So we're paying them to use their brilliant search results in order to remove all trackers and logs.
A better test would be "vital function that strongly tends toward a natural monopoly". That's what we see with sewers, power lines, roads, etc., which is why they are usually operated publicly.
With search that's not so obviously true: Google dominates because they got a big lead at the right time, and now nobody can match them in scale. But that can be solved, for example by giving grants to promising search engines to offset their costs, or by operating a crawler from public funds and giving everyone free access to the crawls (which would be kind of the digital equivalent of operating libraries).
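The second idea has a working precedent: Common Crawl, a nonprofit, already publishes free web crawls with a public index API. A minimal sketch of querying it (the collection id below is just an example; the current list lives at https://index.commoncrawl.org):

```python
import json
import urllib.parse
import urllib.request

# Example collection id; Common Crawl publishes a new crawl roughly monthly.
COLLECTION = "CC-MAIN-2018-47"

def crawl_records(url: str) -> list[dict]:
    """Ask the Common Crawl CDX index which captures exist for a URL.

    Note: the server answers 404 (urlopen raises) when a URL has
    no captures in the chosen collection.
    """
    query = urllib.parse.urlencode({"url": url, "output": "json"})
    endpoint = f"https://index.commoncrawl.org/{COLLECTION}-index?{query}"
    with urllib.request.urlopen(endpoint) as resp:
        return [json.loads(line) for line in resp.read().splitlines()]

for record in crawl_records("example.com"):
    print(record.get("timestamp"), record.get("url"))
```

A publicly funded crawler with open access would essentially be this, scaled up and guaranteed, the digital equivalent of operating libraries.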