There are a whole host of factors behind this, but I'm certain that the switch to Natural Language Processing / Semantic Search drove this decline.
“Product vs. Product” -> list of spam websites that just show them side by side with ads.
“Product Review” -> same as above, spam.
“Product referral code” -> sites with spam and no codes.
Interesting web content is on Reddit (and YouTube); Google exists as a tool to search Reddit, since the "new" Reddit site is awful.
Programming questions are mostly Stack Overflow, but occasionally you get a useful blog.
General knowledge is Wikipedia; news is Axios.
Amazon for most purchases (except for products that have their own brand and sites that are at risk of being faked). Etsy for boutique stuff.
Most of the rest of the web is junk.
I think we could probably go back to curated 90s era web portals and have a better experience.
Is it safe for cats to eat X?
Cats are mischievous and we love them.
X is not typical cat food. Let's go over some background on X before we answer the question.
Cats are obligate carnivores.
Thanks for reading, make sure to subscribe or buy these products!
Google today limits how much of "the web" (their index) they will let you see.
You are only seeing what they choose to allow you to see.
Google, not the user, determines "relevance", and Google automatically excludes results. In theory this sounds useful. In practice, Google now limits the maximum number of results to 200-300, or 500 if you add &filter=0. Retrieving 501 search results for a single search is not allowed. Sorry.
Try a search for some common phrase like "the web". Surely this phrase occurs on more than 500 web pages, yet Google will limit you to only 231 results. Does that represent the entire www? Then you try "repeat the search with the omitted results included", and Google limits you to 466 results. WTF. What if you searched page titles for some string? Is every string you search going to be found in fewer than 500 pages?
Google search results today are not representative of the entire www. Not even close.
If I ever search "X vs Y" on Google, I append site:reddit.com
I'm not a huge fan of Reddit, but it's about the largest forum. I really miss Forum search in Google.
Usually: voted on/curated by members who are specialized in some way (reddit, so, hn, wikipedia to a degree).
The fallbacks are editorial sites.
If you make a search engine that focused on indexing user-curated sites + results outside of that that are themselves curated/voted-on by your search engine users (ie, you and others could upvote axios and Amazon as a good source), I think you'd have an interesting model. Basically, take the HN/reddit model to search itself.
I think the future of search is in finding trusted results, such as those upvoted by reddit or HN or SO, assuming the upvoting system is robust enough to withstand spambots etc. The question is whether a big enough fraction of the population actually care to get good search results.
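The curated-ranking idea above could be sketched as a scoring function. This is purely illustrative: the names, weights, and vote counts are assumptions, not any real engine's formula.

```python
# Sketch of "take the HN/Reddit model to search": blend a base relevance
# score with community votes on the source, with log damping so a flood
# of votes can't completely swamp relevance.
import math

def curated_score(relevance, upvotes, downvotes, vote_weight=0.3):
    """Boost results from community-trusted sources, demote distrusted ones."""
    net = upvotes - downvotes
    vote_signal = math.copysign(math.log1p(abs(net)), net)
    return relevance + vote_weight * vote_signal

# (domain, base relevance, upvotes, downvotes) -- made-up numbers
results = [
    ("spam-listicle.example", 0.9, 2, 480),
    ("axios.com", 0.7, 350, 10),
]
ranked = sorted(results, key=lambda r: curated_score(r[1], r[2], r[3]), reverse=True)
print([domain for domain, *_ in ranked])  # the trusted source wins
```

The log damping is one possible answer to the spambot concern: even a large vote delta only shifts the score by a bounded amount.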
You are absolutely correct. So many of the top results are basically machine-generated pages, perfectly optimized for search robots, not so much for humans.
1) Bring back tabbed search results like Cuil used to have. From a UX POV it easily lets you jump to similar/variant topics, and it must be a good way for a search engine to learn what people want more of.
2) Add a "don't show this site" option, so they can easily see when people get annoyed with a result and use that as a ranking signal against search terms. E.g. if people press it for a particular keyword but not usually, Google knows they got that result wrong, and if people press it for almost everything, you know it's a site people don't want to see. And at a personal level it's great to custom-remove stuff, like when Pinterest was dominating so much a year or so ago.
Would be a great way to community curate results.
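The two-sided signal described above (wrong for this query vs. a site nobody wants) could be told apart by comparing per-query and global block rates. A minimal sketch, with made-up thresholds and names:

```python
# Distinguish "Google got this result wrong for this keyword" from
# "users reject this site everywhere", using block ("don't show this
# site") counts relative to impressions. Thresholds are illustrative.
def classify_blocks(blocks_for_query, impressions_for_query,
                    blocks_total, impressions_total):
    query_rate = blocks_for_query / max(impressions_for_query, 1)
    global_rate = blocks_total / max(impressions_total, 1)
    if global_rate > 0.2:
        return "demote everywhere"      # site people don't want to see at all
    if query_rate > 0.2:
        return "demote for this query"  # keyword/site mismatch only
    return "no action"

print(classify_blocks(300, 1000, 400, 50000))       # blocked mostly on one query
print(classify_blocks(12000, 50000, 13000, 50000))  # blocked almost everywhere
```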
Probably some financial positive feedback loop is responsible for this.
I tried this with a bunch of products ripe for spamming: instapot review, nvidia 3070 review, and Apple Watch review, and the first page results were almost entirely reputable.
I also tried “huggies vs pampers pull ups” and the result is quite a good variety of forum threads, blogs, and reputable articles. “Quasar formation” leads to Wikipedia, but also academic articles and astronomy.com. “How beer is made” is even better quality.
Is most of the web junk? Have I gotten astronomically lucky in not finding it? Are these somehow the exceptions that prove the rule?
(Ironically I think my search terms caused this comment to be flagged as spam..!)
I’ve found this to no longer be the case since Reddit started doing nefarious tricks with the date of a post.
For example, I’ll click on a link in Google showing it’s from last week when in reality the post is 2 years old.
Once I took the time to read one of those sites, and it was obviously generated content, and awful at that. I was still reading the pros and cons of one of the products when I noticed the same point was listed under both, just with different wording.
GPT-3 or similar is going to make this way worse.
The only time I still use google (via !g on ddg, what an awesome feature) is to search for local info or in native language. But even here ddg is getting better, especially if I append the name of my city or country.
I didn't realize that I'd migrated over to this. All my searches now are 'xxx reddit' or 'xxx stackoverflow'
That's true. But buried amongst all that is the result you actually wanted, which Google doesn't have any incentive to lead you to.
Back in the 90s, search engines were driven largely by sparse vector representations of documents, such as TF-IDF vectors, before latent semantic indexing, topic modeling, and other dense vector representations like sentence embeddings entered the fray (not to mention non-content features that use the web graph, click stream, etc.). A lot of NLP applications use a mix of dense and sparse features, but it's hard to get the balance right in a way that works for all inputs. Google's pendulum has swung too far in the "dense" direction, as it certainly seems "dense" a lot more often lately!
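For concreteness, here's the sparse, lexical end of that spectrum in miniature: TF-IDF computed by hand over a toy corpus (documents and query are made up for illustration). A dense system would instead compare embedding vectors, losing this exact-term precision but gaining synonym matching.

```python
# Toy TF-IDF scoring: rare terms ("search") count for much more than
# common ones ("the", "web"), which is the precision that lexical
# ranking gives you and pure semantic matching can blur away.
import math
from collections import Counter

docs = [
    "cats are obligate carnivores",
    "the web of 2021 is mostly junk",
    "search the web with a search engine",
]
N = len(docs)
# document frequency: how many docs contain each term
df = Counter(term for d in docs for term in set(d.split()))

def tfidf_score(query, doc):
    tf = Counter(doc.split())
    return sum(tf[t] * math.log(N / df[t]) for t in query.split() if t in tf)

scores = [tfidf_score("search the web", d) for d in docs]
print(scores.index(max(scores)))  # doc 2 wins on the rare term "search"
```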
I think it's a popular opinion on HN that Google search sucks, but I just don't agree. I used DDG on all my devices for the better part of last year, and bailed when I noticed my !g usage ratcheting up.
When people are trying variations on a search it's a clear sign of failure. They should do something about it, like use a verbose natural language interface or select a different strategy for ranking or enable the exact in-depth mode. Apparently the NLP community can do natural language Q&A in papers, but Google can't do it in search.
Other pain points: searching in depth through all results on a query, not just the top skim, and remembering context between searches.
And Google Assistant itself is too poorly integrated and dumb. Where's the GPT-3-like language skill? So many TPUs; what are they doing all day long? They have more text and images than any other entity on the internet; Google bot has been sucking it up for so long (not to mention links, keywords, and clicks). It should show in the quality of their AI. It's a shame to have OpenAI steal their thunder like this.
While it’s an unpopular opinion on HN, there’s no denying that from a user perspective, that’s only a good thing.
Which is ironic, because one of Google's original innovations was to make a space an AND connector instead of OR (which its competitors used to maximize result counts). Back then, they understood that fewer, more specific results were better.
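That AND-vs-OR distinction can be shown with posting-list set operations, a heavy simplification of a real inverted index (the index contents here are invented):

```python
# Toy inverted index: term -> set of document IDs containing it.
index = {
    "hifi":       {1, 4, 7},
    "headphones": {2, 4, 7, 9},
}

def search_and(terms):
    """Early-Google style: every term must match -> fewer, more specific results."""
    postings = [index.get(t, set()) for t in terms]
    return set.intersection(*postings) if postings else set()

def search_or(terms):
    """Competitor style: any term matches -> maximized result counts."""
    return set().union(*(index.get(t, set()) for t in terms))

print(sorted(search_and(["hifi", "headphones"])))  # [4, 7]
print(sorted(search_or(["hifi", "headphones"])))   # [1, 2, 4, 7, 9]
```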
The web of 2021 sucks.
A lot of people say this is because Google is worse at returning the desired search results for various reasons (algos, ads, spam, etc). I've come to suspect it is more deliberate on the part of Google.
I've said it before, so I'll belt out the chant again: "Google doesn't make money from providing the right search results. Google makes money from keeping you searching for the right search results."
This is especially true for the vast majority of people on the internet who do not know there are any (better or worse) alternatives to Google.
It felt like a thick wall of glass separating worlds had suddenly come into view.
I guess it's time for something new to do the same thing to Google that it did to those companies years ago.
I'd much prefer going back to something like that now than bother with Google's approach; depending on the search term, I already know what the top sites are going to be, and I know they won't have what I'm looking for.
This feels very true... the majority of my searches go like this:
1. Search with all relevant terms with some parts that should be exact combinations in quotes => nothing or BS results
2. Remove words that Google is fixating on without context => no or seemingly unrelated results without search terms at all
3. Reduce specificity to 2 or 3 words, topic and subtopic (trying to locate context only and search the rest myself) => sometimes ends me on sites where I can then just browse for the result i'm after
4. Worst case, reduce to a single search term to find huge context sites and search manually. Sometimes at this point I just give in and manually navigate to other sites where I know I can browse and narrow down the context myself and then try to find what I'm looking for... it really feels like a curl web scrape and grep would work better than Google at this point (yes, I know about Google's "site:"; it doesn't work properly anymore).
I can't say I've really counted how many days that's true. I can't even say that I really remember what searches I did yesterday or the day before. But if I'm not totally weird and google searches were dramatically better in the past, then they must have produced the desired results every time on almost every day.
Same for me, but that's because I just stopped searching for things that I know will not give me good quality results as they used to be circa pre-2010 back when search tools like +, -, and "" still worked and the results weren't filled with generated SEO texts.
Oh, I forget some people don't know why Google enjoys its current position:
Back before 2007 they completely blew the competition out of the water:
If something was accessible on the Internet and wasn't behind a noindex spell, Google would find it.
Compared to other search engines that both then and now work more like Google does today it was totally amazing.
Then things started to go sideways:
- first there was: did you mean <something else with similar spelling>? (this was actually user friendly)
- then there was: we didn't find many results for <search term> so we included <related but different search term>, use double quotes to search for "<search term>"
- then there was fuzzing: expanding all my search terms into something unrecognizable unless I double-quoted them
- and the latest few years they have also ignored my double quotes
Somewhere in between, they messed up the + operator that used to mean "make sure this term is included", as well as ~, which used to mean I wanted Google to fuzz that term.
Sometimes I can get better results by trying to think how my wife would phrase the question, i.e. instead of searching for
- <search terms including a weird misspelling from a dialog box>
I search for
- <why does my computer show search terms including a weird misspelling from a dialog box>
But other times nothing works.
https://saraacarter.com/google-whistleblower-state-attorney-...
This should destroy Google, but for some reason it doesn’t, and it baffles me that they keep surviving things like this.
I don't blame Google for this. 99% of alternative medicine is quackery and it's almost impossible to separate wheat from chaff in an automated way.
I suspect it is because the best sources of information do not serve ads, but news, spam, and social media do.
If I'm looking for a review of a Yeah Yeah Yeahs album, I will see links to Amazon, Pitchfork, Rateyourmusic and other sites well before I find something written by a regular person on their personal blog.
Same goes for products. I will see a page of Amazon links before anything on say, a boutique Ecommerce website with a Shopify backend.
https://www.google.com/search?q=See+them+more+than+my+friend...
I don't know what other people see, but I just get SEO garbage listicles. Nothing even remotely relevant. Add "lyrics"? You get lyrics..... for other songs! https://www.google.com/search?q=See+them+more+than+my+friend...
You have to quote the entire string! Which won't work if you misremembered a word or two.
I don't understand what goes on inside corporations. I guess power users aren't a target demographic anymore.
One word of caution: I use verbatim mode and I regularly (but not always) get Wikipedia clones as my top result. It must be skipping some of their anti-spam filtering.
For those who don't know, after a search, click on "Tools", then "All results", and select "Verbatim".
* shows results despite having zero actual matching results
* ignores parts of the query even if they are surrounded in quotes
* gets completely thrown off by keywords (and, or, etc) and symbols (=, ;, etc)
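The quote failures in that list could be caught by a strict post-filter: keep only results whose text actually contains every quoted phrase verbatim, which is what double quotes are supposed to guarantee. A minimal sketch with invented sample results:

```python
# Enforce quoted phrases verbatim: a result survives only if its text
# contains every double-quoted phrase from the query (case-insensitive).
import re

def quoted_phrases(query):
    """Extract the contents of every double-quoted span in a query."""
    return re.findall(r'"([^"]+)"', query)

def verbatim_filter(query, results):
    phrases = [p.lower() for p in quoted_phrases(query)]
    return [r for r in results if all(p in r.lower() for p in phrases)]

results = [
    "See them more than my friends and family (full lyrics here)",
    "Top 10 songs about friends (listicle, no matching lyrics)",
]
print(verbatim_filter('"more than my friends" lyrics', results))
```

The listicle is dropped because the quoted phrase never appears in it, no matter how many of the individual words it contains.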
But this needs to be a public effort to create any meaningful action from search engines.
How many of you would be willing to contribute to such a platform by submitting your bad search results?
[1] https://needgap.com/problems/207-search-engine-wall-of-shame...
I thought the main issue here was Google directing searchers to its own services and scraping data from non-Google websites to present on its own pages? Its declining search result quality is certainly an issue, but I don't think this link is about that at all.
I don't think it has a lot to do with a new search algo. I think it's the opposite: the algo is mostly the same as a decade ago, but now it's being abused by SEO experts. And Google doesn't seem to care yet.
Specifically, it's handy to have the query, and the URL that you think would be the best result.
Google engineers can then peer in and see why that URL didn't rank highly.
The most common reason is it's behind an invisible captcha or blocked by robots.txt, so do check that.
Lowes has made a video, so have dozens of contractors. It's amazing how tool rental firms have not advertised on there ... actually that is a bit weird...
SideNote - for some reason I replied to an email outreach from Rand, and he actually replied back within a few hours, clearly having read my words and come up with a real answer.
I was frankly shocked. Either he works an inhuman amount or he has secretly invented AGI to respond to his emails for him.
Kudos and good luck
Oh, and then there's a bunch of weird links that come up to obvious content generated by AI being peddled. Odd.
[0] https://en.wikipedia.org/wiki/Search_engine_optimization
I have ZERO doubt in my mind that a Google search update that decreases their ad revenue would be rolled back and metrics tweaked to make it so. So...
Mostly ignoring what is on the sites
It's the switch from lexical/syntactic search to semantic search. There are benefits and drawbacks to each and it would be nice for Google to give you the option to try each.
I wager that most people don't perceive a decline in search results quality. At least not to an extent where they'd notice and switch to using Bing.
So GP may not be "an unprofitable long tail". GP may in fact be "a profitable typical long tail for whom Google Search is frustratingly unhelpful".
Google giving up on difficult searches and focusing on showing promoted links, AMP sites, and "rich" result boxes is probably a sound business strategy too, and is unlikely to make users switch search engines this late in the game (Bing/DDG don't seem to do better anyway).
Google decided they liked a lot of the content and started providing it in their knowledge graph. It was initially great to be validated and the link to my content at the top of the search result was pretty cool to see. But it did tank traffic, as Google scraping my content and giving it away at the top of their search result page meant people didn't need to navigate to my site.
Felt pretty terrible, if I'm being honest, but I also have always subscribed to understanding the risks for relying on other networks for my own benefit.
BUT!!!
As a regular user of search engines, I love getting the answer super fast without necessarily having to guess what sort of ad-trap I'll have to navigate to get the answer I'm looking for on some random content site.
As a user, when I'm asking some non-critical question, it's nice to just get a snappy answer. I appreciate it.
The problem of course arises when content creators stop making the content for Google to scrape and index, what then?
This has already happened. People put content on their sites only long enough to figure out what works, then turn that content into a book that can be sold. Every Cal Newport book came out of his blog posts.
Others are turning content into paid newsletters, courses and Patreon-only walled content.
Wrote something 15 years ago? Just set the "Date Modified" to last week and Google will think it's been updated.
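That trick works because crawlers largely trust a page's self-reported date signals. A sketch of what a site can claim (the URL and dates are illustrative):

```html
<!-- Self-reported freshness metadata a crawler may take at face value -->
<meta property="article:published_time" content="2006-03-01T00:00:00Z">
<meta property="article:modified_time" content="2021-02-15T00:00:00Z">

<!-- ...and the corresponding sitemap entry: -->
<!-- <url><loc>https://example.com/old-post</loc><lastmod>2021-02-15</lastmod></url> -->
```

Nothing forces the `modified_time` or `lastmod` values to reflect a real content change, which is exactly the loophole being described.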
The 'content' you and I are seeing are just second and third-hand summarizations of popular topics written by content marketers.
...which is awful! Try searching for historic information on a viral topic and the results are littered with incorrectly dated listings that may satisfy the search criteria but are from the wrong time span.
I don’t see it as anticompetitive so much as copyright violations. The fact that Google makes advert revenue using this strategy just contributes to damages.
Anticompetitive would likely require showing that Alphabet (and potentially other companies) prevented users from accessing the original web page (which might be possible but more difficult to prove).
The financial sources for information-collection-and-creation undertakings throughout human history have been manifold (wealthy patron and collective financing, to name two). It is entirely possible, if Google finds its data sources drying up, that Google itself will pay for data collection (they already do this with several categories of data they host).
Which I think says a lot about how Google is used now. I don't use Google to find web pages, I use it to find an answer to a pressing question. When I'm looking for something to read, I go to a content aggregator like HN or Reddit.
Of course this isn't sustainable. If Google is just presenting other sites' data "for" them, and thus depriving them of traffic and revenue, eventually there won't be any incentive to create the content Google is scraping. Long term, this seems self-defeating for Google, just as it's ruinous to the sites they scrape.
It's an interesting hypothesis, but it assumes the only reason people bother to seek and aggregate facts is to get paid (and the only way to get paid is click-through to ads). No doubt that early surfacing of data could disrupt payment-for-visit-dependent models, but that doesn't guarantee the data evaporates from Google's view or that new payment models don't arise to replace the old.
You want to be able to force Google to take your content, AND force them to pay for being forced to take it.
A better idea would be coming up with plans to break up Google's market share among multiple smaller search engines, which you might then be able to bargain with for paid content.
I Google things for information which I can glean from the search results more than looking for a website.
Like what's the definition of a word or a quick calculation is easier than opening the calculator.
There's even a metronome if you search metronome.
Likewise, but with Google's own search, not Safari.
The company mission statement is "Organize the world's information and make it universally accessible and useful." Nothing about that statement says "redirect web traffic to third-party sites." They consider it an inconvenience to the user (a negative signal) if the user has to click through.
No doubt there's some truth to that, but I'd like to indulge in a slightly different flavour of Google bashing on this occasion.
The reason I often don't click through on search results is that the search results are garbage - either irrelevant or poor quality. I'm in the midst of refurbing and redecorating my house. Often I'll need to do some research on topics related to that, and when I do that I often find I need to refine my search query to find anything of relevance, even sometimes digging through several pages of results manually to find a really useful page.
And then there's work: I still haven't found any answers on this, but one of the things that's on my mind about becoming a more senior business leader is that I feel like I'm changing as a person. I feel like the way I think about problems and people is changing. That's to some extent to be expected, but the issue is I'm feeling ambivalent about some of the changes I perceive.
So it's about the effect that leadership has on the leader (selfish, I realise).
But when I start searching around this topic, what does Google want to show me? Pages and pages of results about change management or, when it's being marginally less of a village idiot, pages and pages of results about leadership styles and changing leadership style. Neither of these is what I'm talking about.
A human will understand that, but Google doesn't, and I'm finding that increasingly to be a problem when I'm looking for information: either, (i) my results are overwhelmed with low-grade spammy SEO'd to hell and back content, or (ii) Google's AI is too bloody stupid and pig-ignorant to understand what I'm talking about.
Hence I don't click through on the search results the majority of the time.
The worst thing is that content creators are manipulating this to build content farms full of low quality content and overwhelming ad volumes.
I miss the old web where Google was really great at finding that one guy's blog post that was actually useful.
Dunno, but this might be the sort of perspective you're seeking: there's a book https://en.wikipedia.org/wiki/Moral_Mazes about the culture of managers at large corporations. And a series of posts on it -- I'd start with the one near the top that's just quotes from the book, to see if it's what you're after: https://www.lesswrong.com/s/kNANcHLNtJt5qeuSS
"nifarious" - oh, it's spelled "nefarious"
"1000 USD in CHF" - aha, about 1000 swiss francs
4 tablespoons in cups - 1/4 cup
"how old is taylor swift" - 31 years; this shows up in the suggested search dropdown, so you don't even need to hit the result page
Disagree. It's completely relevant. There are two reasons why people do not click on search results - either because they got the answer directly from the rich snippets (like you suggest), or because the results are so trash that you need to now perform another search to narrow the results further.
For instance, searching something like "what is the most common car color" shows me an excerpt of the answer from an article on automobilemag.com without having to click and read the whole thing.
5-7 years ago, I would've had to click and skim through an article.
Moreover, Google compiles independent lists of their own. Searching "what animals mate for life" or "movies tom cruise has been in" will pull up an interactive list. No click-through required.
Good search was historically one of the harder unsolved problems on the web, even back in the 90s. The problem is that nowadays we treat it like a solved problem when in reality it still isn't.
Isn't that the whole point of Google providing its own interpretation (sometimes several of them) of an answer to the search query ahead of the classic search results: giving users what they are likely looking for without requiring another click?
> zero-click search problem
As a user, I see zero-click searches as a benefit, not a problem. It's a problem for people trying to use Google as a “clickstream” to waste my time and try to get a crack at my money, but then, that's exactly why it is a benefit to me.
I find Google Image search, which I used to love, filled with absolute garbage now. 90% of links are to Pinterest which greets you with usual overlays, signup modals etc. I use DDG's image search because I find Google's search unusable.
Also, Yandex image search is probably the best reverse image search on the market. It's crazy how much better it is than Google.
(YMMV: it depends on your specific searches, I'm sure.)
Considering two-thirds of searches end there, I wonder how many people are walking around with false information that Google fed them.
However, Google's new method of extracting and displaying possible answers to queries from external sources instead of simply linking to those sources feels like it falls outside of the bounds of fair use.
[1]: https://www.everycrsreport.com/reports/RL33810.html
[2]: https://www.lawteacher.net/free-law-essays/copyright-law/a-b....
Google would probably make a "de minimis" defense of the practice. A previous case (Kelly v. Arriba Soft Corporation 2002) allowed thumbnails in search engines as fair use, but that's not quite the same. And current interpretation seems to be pretty restrictive on what's insignificant enough (witness various music sampling cases). And while any individual result may be small, doing so on an industrial scale is probably not "a trifle" and harm can probably be demonstrated. Ultimately, it would have to be tried to tell.
They might also argue implied license. In cases like meta tags, that's probably reasonable. For straight-up scraping, it's more questionable.
So, I mean, we're way, way past Google's heyday. It's not even close. But it's interesting to me that I can recall having those discussions that many years ago, and that people still complain about the same issue today.
In fact, just about every other search engine is better in my opinion.
I don't think that's the case. I think this has more to do with the fact that Google now deliberately surfaces as much information - including summaries, answers, tools and widgets - in the search results page itself, which incentivizes searching without a subsequent click. Google very much wants you to end your search journey on google.com, but I think that's because they believe it will keep people coming back. I do not think Google directly tries to keep people searching for the same thing.
This may still raise interesting antitrust concerns though.
Put differently, there is a conflict of interest inherent in the very nature of Google's business. It can only become a steeper death spiral from here if the motive continues to be profit above all else.
There are all sorts of little utilities that are now just part of Google.
- I'd make sure that multipurpose sites get lower ranks; ecommerce with a blog would get a worse rank than a standalone blog or a standalone ecommerce site. This would eliminate all the content marketing, and owners would have to focus on their core business
- I would put a cap on the amount of content that increases rank. A website with a million recipes harvested from other sites won't rank better than a blog with 10 quality recipes.
- I would downgrade rank for use of third-party cookies, invasive ads etc.
- I would give users an option to "mute" a website
- Randomise top results to make sure no one can "occupy" top spots.
spell checking (term "wrng wrd")
dictionary (term "define word")
word translation (term "random in greek")
calculator (term "1+1")
unit conversion (term "1m in ft")
weather (term "weather tokyo")
sport scores (term "uefa scores", or ufc, nba, ...)
map directions (term "directions to nearest city")
For fun, search any of the aforementioned terms alone (besides the first one) and with the term mentioned in parentheses. Also, lots of searches return card information (e.g. "spell checker" or "Isaac Newton") or snippets, like those seen when searching for instructions. That is, when searching for something encyclopedic where a one-line summary or some simple instructions will suffice, people will not go ahead and open Wikipedia or any other site.

If I want to control the revenue then I use a different protocol.
I can also just simply set up a robots.txt file telling others what to do with it and lots of indexers, Google included, respect and abide by my wishes.
When people misuse a tool, it seems easily fixed by them using the tool properly.
Trying to change the web because I’m too stupid to understand it seems like a bad thing.
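For reference, the opt-out mechanisms that comment refers to look roughly like this (the path is illustrative). robots.txt keeps a crawler out entirely; the nosnippet directive lets a page stay indexed while forbidding Google from displaying extracted text:

```
# robots.txt -- keep Googlebot out of a section entirely
User-agent: Googlebot
Disallow: /members-only/

# per-page alternative, placed in the page's HTML <head>:
# <meta name="robots" content="nosnippet">
```

The per-page directive is the finer-grained tool here: it addresses exactly the scraped-snippet complaint without giving up indexing altogether.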
* current time [city]
* [city] weather
* [city] covid
There are so many ads, and now every time I search for something on Google, the first five results aren't even related to what I'm looking for. You search for "hifi headphones" and eight of the top ten results are something like "The top 10 hifi headphones" articles from three or four years ago, or "What you need to know about buying hifi headphones" informational articles. Not to mention the obligatory Amazon product link stuffing at the top of any product search results.
Google's results are just so convoluted, it's a real pain to try and wade through everything they're advertising in order to get to an actual product or manufacturer website these days. I just gave up a few years ago. Too much advertising and not enough organic results to be useful anymore.
If I google for the weather in Chicago, I'm not going to click on a link to weather.com for it, because the seven day forecast is right there.
If I google for information about a person, Google scrapes Wikipedia and puts that right in the results page sidebar.
Flight information, bus schedules, anything that looks like a calculation, a whole bevy of other things, Google just preempts any results and shows inline at the top of the search page. Why would you click through to anything?
I remember seeing a demo that accurately answered a lot of search questions, like who killed Mahatma Gandhi, just using the GPT-3 model (1). I'm sure Google must have even better models than this, and it only makes sense for them to answer questions directly when there is a sufficient confidence level.
(1) https://twitter.com/paraschopra/status/1284801028676653060?l...
You could probably make the case that someone who would rather read the snippet on Google isn't likely to contribute, but it likely has an effect on the margin.
And that's ignoring the donate banner they throw up there from time to time.
And how many are no click because google has presented semi-correct but not really correct information at the top of the results so the user finds what they think they're looking for without actually clicking?
Seems like google is doing just fine so I'd imagine both of the above scenarios account for a significant portion.
While it's true that this is possibly killing sites by preventing traffic flow, it's one way to battle content padding for the sake of grabbing traffic.
It might be an interesting move if Google were to pay a small fee to sites whose information was useful for a particular search, like those I chose to expand.
My current hypothesis is:
Users who click ads and generate ad revenue like this garbage. Google learns to serve this outlier group of users. Killing the experience for everyone else is just a side effect.
After reading the article: the author doesn't correlate the findings with a cause, so who knows what the real reason for this is. But I know for me at least, it now takes 3, 4, or 5 searches to find an abstract with something that looks even close. Maybe I suck at Google searches, but after doing hundreds a day for over 12 years, I feel Google search just got reaaaal shitty some time in the past few years.
ddg used to not even be close to Google in result quality; now, imo, they are on par with each other. And that is largely due to Google's decline; ddg only got slightly better.
"love monetizing niche search engines and other data products, but it looks like Google will eventually get into any industry where the main source of traffic is organic search, I wonder what is next."
This alone should be setting off alarm bells for everyone, the government should be raiding Google offices to find the necessary facts to answer these questions. Google operates as the de facto front page of the Internet, the default starting point for the vast majority of the public, and they're incredibly dodgy about the real facts about how they use that power.
It's incredible how often Google both states a "fact" that is extremely biased in their favor, and simultaneously suggest it's indisputable because only Google has data on it, and of course, Google will not share the data because it's confidential business information.
This stonewall is behind incredible lies like the idea that targeted advertising increases revenue by 50%, despite independent data finding the difference closer to 4%.
I think the issue at hand here, is that Google should be legally compelled to disclose the real answers to this question, and a lot of other questions about how their algorithms work.
> The data in the article comes from 100 million people dumb enough to infect their devices with malware
This is ironic, considering the Google Toolbar, headed by Sundar Pichai himself, is much of how Google got its mass adoption, as well as the browsing data to improve their search rankings. Perhaps the difference between "malware" and "useful tool" is how successful the business behind it is. ;)
As a user: thank fuck I don't need to click through to those piece-of-shit sites that are nothing but spam and ad farms front-loading fake description and title content
You can glean the information you were looking for from the results themselves. This is particularly true when you use google as a spell checker.
On the other hand I’m excited about other portals that will open up. This just cannot be how things end.
Mobile phones are a walled garden for megacorps.
The purpose of Wikipedia is to improve human knowledge, and all their content is CC, so unless Google is altering content or biasing searches, sucking it into the Knowledge Graph seems to serve one of the main reasons Wikipedia was started.
I have a hard time seeing this as a problem.
People who create and curate content quite often want to be paid for their work, they create businesses to support themselves, often in the form of a website. If Google scrapes and repackages that content, so the searcher never even visits the creator's site, what incentive does the creator have to continue making content freely available?
I would not be surprised if the vast majority of today’s Google searches are being made by machines, scripts, etc.
Desktop vs mobile was mentioned but that doesn’t necessarily prove that the traffic is “human”.
Still think it does make for a better user experience overall, just wish they'd add these details to some of these Search schemas they themselves adopted for the purpose of getting more pinpointed info.
Imagine it could be done for under 10M?
That's not to say the results on Google aren't getting worse, because they are, but DDG still has a long way to go.
Sure, Google also has a flight tracker widget, but it's not that much worse than DDG. If you really want no widgets or inline responses, https://www.startpage.com/ has that.
EDIT: Startpage does have a time widget (search for "time in <some place>"), but doesn't have a calculator, timer, or literally anything else. How odd...
"Startpage acts as an intermediary between you and Google, so your searches are completely private. Startpage submits your query to Google anonymously, then returns Google results to you privately. Google never sees you and does not know who made the request; they only see Startpage."