The ping attribute basically adds click tracking as a native browser feature, so sites don't need URL redirects. It also makes this analytics collection much easier for the site and more mysterious to the user. Looks like most vendors besides Firefox support it. (Mozilla was pretty opposed, as I recall.)
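For anyone who hasn't seen it: a link like `<a href="https://example.com/article" ping="https://tracker.example/click">` tells the browser to fire background POSTs to the ping URLs when the link is followed. A rough JavaScript sketch of what a supporting browser does (the POST body "PING" and Content-Type come from the WHATWG spec; the function names here are my own, and this skips the spec's Ping-From/origin checks):

```javascript
// The ping attribute holds a space-separated list of URLs.
function parsePingUrls(pingAttr) {
  return (pingAttr || "").split(/\s+/).filter(Boolean);
}

// Rough sketch of the browser's behavior when a pinged link is followed.
function sendPings(pingAttr, targetHref) {
  for (const url of parsePingUrls(pingAttr)) {
    fetch(url, {
      method: "POST",
      body: "PING",
      headers: { "Content-Type": "text/ping", "Ping-To": targetHref },
      keepalive: true, // let the request outlive the navigation away
    });
  }
}
```

The point being: the href stays clean, and the tracking request happens entirely out of band.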
If you're a Chrome user, there are some extensions that disable ping requests/link auditing [1]. (EDIT: a commenter noted that uBlock Origin already blocks these! So I recommend that over this obscure extension.)
[1] https://chrome.google.com/webstore/detail/ping-blocker/jkpoc...
https://html.spec.whatwg.org/multipage/links.html#hyperlink-...
> 2. Optionally, return. (For example, the user agent might wish to ignore any or all ping URLs in accordance with the user's expressed preferences.)
The problem isn't that Firefox doesn't support the ping attribute, the problem is that Google fails to respect user requests not to track.
> When the `ping` attribute is present, user agents should clearly indicate to the user that following the hyperlink will also cause secondary requests to be sent in the background, possibly including listing the actual target URLs.
> For example, a visual user agent could include the hostnames of the target ping URLs along with the hyperlink's actual URL in a status bar or tooltip.
Does any browser that supports pings actually do that?
Also the "Note" in that section provides a decent argument for supporting `ping`. Basically, users will have their clicks tracked anyway, but the `ping` attribute provides more transparency and a better user experience. Though the transparency part is debatable given browser implementations.
Though I admit there are some searches I have to send to Google to get the result I'm looking for.
Also probably worth noting that the W3C doesn't maintain an HTML standard anymore[1]; the WHATWG standard is the definitive one.
Something that's in the spec that matters (WHATWG) but not in the one from the organization that desperately pretends to still be relevant to HTML, though it hasn't been since it tried to push XHTML 2 (the W3C), isn't “rejected” or “nonstandard” in any meaningful sense.
Google search is good because it tracks which links people click and knows when they come back to go to a different URL on the page. If 99% of people visit the top result for a query, return, then hit the second one, chances are that the top result never answers what the search query asks.
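That "click, come back quickly, click something else" signal can be sketched in a few lines. A toy version (the field names and the 30-second cutoff are invented for illustration, not anything Google has published):

```javascript
// clicks: array of { result, returnedWithinSec } events, where
// returnedWithinSec is null if the user never came back to the results.
// Returns the fraction of clicks on each result that "bounced" back
// within 30 seconds -- a high rate suggests the result didn't answer
// the query.
function bounceRate(clicks) {
  const stats = new Map();
  for (const c of clicks) {
    const s = stats.get(c.result) || { clicks: 0, bounces: 0 };
    s.clicks += 1;
    if (c.returnedWithinSec !== null && c.returnedWithinSec < 30) s.bounces += 1;
    stats.set(c.result, s);
  }
  const rates = {};
  for (const [result, s] of stats) rates[result] = s.bounces / s.clicks;
  return rates;
}
```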
[1]: https://securityaffairs.co/wordpress/83890/hacking/ddos-html...
At least with the mangled link approach it’s easier to tell that tracking is going on, but that ping attribute seems extraordinarily sneaky to me. I get that it enables “clean” links but the opaque tracking is way worse in my eyes.
Sigh.
Edit: when I think about it, I guess it’s not that dissimilar to what you can do with JS based tracking anyway, so perhaps it’s not really any worse than what already exists. But it still feels wrong for some reason.
Otherwise I love ClearURLs.
I bet anyone would prefer the ping method to the redirects we see in Firefox that mangle copied URLs.
You seem to be of the opinion that no tracking would be better. And that's fine and a popular opinion around here. But that's not an option as Google relies heavily on the clicks as an input for ranking.
So in a context where you consider that tracking HAS to happen, ping does offer advantages for the user.
This attribute doesn't do anything shady, and would do the opposite if it were actually used. The whole idea is to be able to provide the tracking data the site will get one way or the other, but with the ping attribute, you can do it without mangling the URLs.
I thought I was the only one.
Google generally does not allow the POST method for user-initiated queries, e.g., from HTML forms. However, the POST method is commonly used for tracking.
Using the web today with a "modern" browser and trying to exercise the slightest amount of control is like being in a Spy-vs-Spy MAD magazine comic strip.
[1] Example: a search result that's a PDF. How do I share this link? If I click on it, it downloads to my disk. If I right-click on it, I can copy a crappy URL.
This is a pattern of behavior. For example, when reading an AMP article hosted by google.com, an iPhone will correctly show `google.com` in its URL bar. Whereas an Android will conveniently rewrite `google.com` to show the URL of the article source, falsely implying the browser connected to a hostname that it did not.
In any other context, it would be called phishing.
A good moment to remind everyone: the goal of the attention economy is to make things as inefficient as possible, because money is made on the friction.
In this case, though, Google doesn’t mind frustrating users because there is no competitive alternative search engine.
My alma mater switched its mail accounts to Outlook365. Now all links in email messages - including text emails - are mangled to go through Microsoft's servers. And they're humongously long, too!
In the google search results, click the three vertical dots above the link and to the right of the domain. If using mobile, you'll need to switch to desktop mode to see the three dots. After clicking, an "About this result" pane will pop up to the right, probably[1]. In that pane you'll see the true link, and you can Right Click > Copy Link.
[1]: On my computer, the "About this result" pane says "BETA", so not sure if everyone can use it. It works for me in a private window, though.
It might be that one of the reasons they track link clicking is to determine “bounce rate” [0] to infer how useful the result is. That is something I would want to know if I were building a search engine and wanted to verify ranking accuracy. Though I would have thought there would be better ways of tracking this than URL redirection if JS is enabled.
0: #5 on https://www.spyfu.com/blog/improve-google-rankings/ (I tried to find a more authoritative source but didn’t have much luck. If someone can find a better one, please share.)
- It’s not clear to me if Google actually uses bounce rate to rank results, aside from a generic mention of identifying “signals that can help determine which pages demonstrate expertise, authoritativeness, and trustworthiness on a given topic.” [0]
- Google does track sites you visit, and URL redirects may be one way to achieve this.
> My Activity is a central place to view and manage activity such as searches you've done, websites you've visited, and videos you've watched. [1]
0: https://www.google.com/search/howsearchworks/algorithms/
1: https://support.google.com/accounts/answer/7028918?visit_id=...
(Disclosure: I work for Google, speaking only for myself)
The problem is Google has implemented the tracking in such a way that it's hostile to users, preventing them from copying the target link. There are various ways that Google could allow copying the link while also enabling tracking when it's explicitly clicked, but Google chose an anti-user option because it almost guarantees people click the link.
I stopped using Hangouts a long time ago because every link that was sent was wrapped in a redirect through Google. Sometimes that tracking service would be slow and I'd have to copy the links manually. Really infuriating.
Anyways, there is a big difference here. If I copy a link from Google I get [1] and if I copy a link from DDG I get [2].
1. https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&c...
Also, DuckDuckGo has taken keyboard-centric users into account, whereas Google has not. I rarely have to touch my mouse when searching on DDG. Up and down arrows to select the results, enter to go, / to return to the search bar, left and right arrows to go between images and maps and whatnot.
I use DDG almost solely for the UX. The added privacy is just a very nice bonus.
(Tip: you can also type "figlet ______" and it will give you ASCII art for the text you typed. That's neat.)
I wonder why this is even necessary. User scripting should be a standard feature of browsers. We should have direct access to a complete JavaScript environment every time we launch a browser. Just like Emacs gives users a Lisp environment.
https://outgoing.prod.mozaws.net/v1/8a4c4de845953bc85d10c6465c5c0f11210b5ca1c195b70d7ddfcf8b74592477/https%3A//clearurls.xyz/
and https://outgoing.prod.mozaws.net/v1/8a4c4de845953bc85d10c6465c5c0f11210b5ca1c195b70d7ddfcf8b74592477/https%3A//wiki.clearurls.xyz/
respectively, instead of going to the page directly. Why, Mozilla? Why are you tracking us? Of all the pages...
Some Google links (notably shopping links for products) don't just point at a Google-owned redirect (presumably for ad tracking/payment calculation?); they also change the link target on click (?!? evil ?!?). There are redirect-removal addons which rewrite the original URL correctly, but the on-click handlers mangle the link's target if the event is not blocked.
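The on-click mangling pattern is roughly this: the href looks clean on hover, but a mousedown handler swaps in a redirect URL just before the click navigates. A sketch (all names illustrative; I haven't read Google's actual handler):

```javascript
// The anti-user pattern: rewrite the link's target on mousedown so the
// visible/copied URL is clean but the actual navigation goes through a
// tracking redirect.
function mangleOnMousedown(anchor, redirectBase) {
  const cleanHref = anchor.href;
  anchor.addEventListener("mousedown", () => {
    anchor.href = redirectBase + encodeURIComponent(cleanHref);
  });
}

// Minimal stand-in for a DOM anchor so the sketch runs outside a browser.
function makeFakeAnchor(href) {
  const handlers = {};
  return {
    href,
    addEventListener(type, fn) { (handlers[type] ||= []).push(fn); },
    dispatch(type) { (handlers[type] || []).forEach((fn) => fn()); },
  };
}
```

This is why redirect-removal addons fail unless they also block (or get ahead of) the mousedown event: fixing the href once isn't enough when a handler re-mangles it on every click.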
On-click event handlers should never have been allowed. Hijacking the browser UI is never in the user's interest.
[1] https://searx.me
Btw the list of public nodes is here: https://searx.space/
What is the difference between SearXNG [0] ("next generation," i.e. the one you just linked) and SearX [1]? NG claims to be a fork, but it's not clear why. The main SearX has recent development activity.
That can be achieved using the Privacy Redirect [1] extension: set it to redirect search engine calls and it will use a random engine. The list contains more than just Searx instances and cannot be edited by users by default, so you might have to get the source [2] and build a version with only the search engines you want to use. It can also redirect many other corporate properties, like YouTube, Twitter, Instagram (which does not really seem to work, but since I never go there anyway I don't really know), Reddit, Maps (Google etc.), and others. I have it redirect to private instances of Invidious (for YouTube), Nitter (for Twitter), and LibReddit. I don't use the search engine redirect myself, since I run a custom Searx instance which doubles as an intranet search engine and as such offers more than any public instance.
[1] https://addons.mozilla.org/en-US/firefox/addon/privacy-redir...
One thing I dislike, however: making you agree to terms when sharing an invitation link with friends. You'd think they would want that to be zero mental friction?
Why do they bother not implementing pings when they allow an equivalent privacy invasion to continue? Either way the user's privacy is invaded, but at least URL copy/paste still works correctly with the ping functionality.
[1] e.g. https://bugzilla.mozilla.org/show_bug.cgi?id=229050 though I'm sure there have been many other bugs filed on it.
The code doesn't prevent event propagation, instead it copies the link before propagation happens. I guess this way is more reliable. It works on other sites too, like FB.
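I haven't read that extension's source, but the capture-before-mangling idea can be sketched like this (the helper name is mine):

```javascript
// Walk up from the event target to the nearest enclosing link and
// return its (still clean) href.
function cleanHrefFor(node) {
  for (let n = node; n; n = n.parentNode) {
    if (n.href) return n.href;
  }
  return null;
}

// Browser wiring: register in the capture phase (third argument true),
// so this runs before the page's own bubble-phase mousedown handlers
// get a chance to rewrite the href. Unlike stopImmediatePropagation,
// this doesn't break the page's handlers -- it just reads the URL first.
// document.addEventListener("mousedown", (e) => {
//   const href = cleanHrefFor(e.target);
//   if (href) { /* stash or copy it */ }
// }, true);
```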
It's also kind of a bad web search experience to have actual web search results hidden below ads and Google properties.
On the other hand, "Google search" is very good for searching YouTube and other Google properties.
Use DDG (or some other search engine)
Good search results, with privacy
javascript:(function(){window.addEventListener("mousedown",(event)=>{event.stopImmediatePropagation();},true);})()
(I mostly do this when google's redirect page lags for some reason)
i always just assumed it was for improving the index. the more a result gets clicked, the more relevant it must be.
it's kind of a zeroth-order optimization.
How? The more clickbaity a result is, the more clicks it gets. How do you judge quality by the action of the uninformed (clicking before viewing the content)?
you could do the same for people. first off, a user looking at a search results page isn't uninformed; there's lots of signal on the results page for a search query: the domain name, familiarity/recognition of the domain name, abstract text quality (grammar/spelling), spamminess, etc. for the trained eye, that's a good amount of signal, but who has a trained eye?
you could, say, have some ground truth rated webpages that you have human raters rate in house, and then you could use this to score actual users on the website in terms of who frequently picks the known best result. now you have a cohort of users who you trust in terms of clicking on quality search results.
now you just pay attention to what this cohort pays attention to and let their clicks materially boost the ranking of results.
this is just one over simplified way, i'm sure they do tons of stuff like this (with tons of other stuff to avoid abuse/seo/etc).
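a toy version of that rater-anchored cohort idea (data shapes and the 0.8 threshold are invented for illustration, not anything google actually does):

```javascript
// Score users by how often their clicks match the human-rated best
// result for queries we have ground truth for, and return the cohort
// whose agreement rate clears the threshold. Those users' clicks could
// then be weighted more heavily in ranking.
function trustedUsers(clickLog, bestResultByQuery, minAgreement = 0.8) {
  const perUser = new Map();
  for (const { user, query, clicked } of clickLog) {
    const best = bestResultByQuery[query];
    if (best === undefined) continue; // only score rated queries
    const s = perUser.get(user) || { total: 0, agreed: 0 };
    s.total += 1;
    if (clicked === best) s.agreed += 1;
    perUser.set(user, s);
  }
  return [...perUser]
    .filter(([, s]) => s.agreed / s.total >= minAgreement)
    .map(([user]) => user);
}
```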
the asynchronous ping attribute is new[1]. i'm pretty sure it was the mid-00s when i first noticed that search result links bounced through a redirect via google. (and i'm guessing it was added to reduce confusion when hovering and ameliorate copy/paste issues for search result links, but i don't know for sure)
[1] https://github.com/mdn/browser-compat-data/pull/9470 (april 2021)