My link: https://pbs.twimg.com/media/Byv5uWSIIAEf38C.jpg
Facebook made: https://pbs.twimg.com/media/Byv5uWSIIAEf38C.jpg?fbclid=IwAR2...
I guess if FB really wanted, they could make a second fetch to ensure that their added params don't break the third-party server. Or they could add a whitelist of domains that use their first-party tracking.
I really don't like the end result right now: it looks like "the web works" from inside FB, but not when you try to follow a link out of it. I don't believe for a second that this is FB's intent, but it's one more case of a silo breaking another part of the ecosystem, and to an untrained eye the third party looks like the culprit.
Regardless of the protocol, it's not unreasonable to return a 422 or 403 by default for that (malformed) request when first seen, as it would indicate that something sketchy may be going on; the parameter can be whitelisted later.
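As a rough sketch of the "reject unknown params by default, whitelist later" idea: the allowed-parameter set and status codes below are illustrative, not from any particular framework.

```python
from urllib.parse import parse_qs

# Hypothetical whitelist of query parameters this endpoint understands.
ALLOWED_PARAMS = {"post_id", "comment_id"}

def validate_query(query_string, allowed=ALLOWED_PARAMS):
    """Return 200 if every query parameter is known, else 422."""
    params = parse_qs(query_string, keep_blank_values=True)
    unknown = set(params) - allowed
    return 422 if unknown else 200
```

Under this policy, an `fbclid`-decorated request would get a 422 until `fbclid` is explicitly added to the whitelist.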
So if you're part of the group who sees this as a good thing, I'm genuinely interested to understand why, and whether you view the mass surveillance of the general public by advertising companies as bad.
Previously, only large brands and national/multi-national corporations could afford to advertise at scale and reach customers through TV/Radio/Newspapers (that too with a high minimum spend).
Now your local mom and pop bakery could have a spend as low as $100 a month to reach their customers and help drive their business.
The world is not black and white, and neither is the morality of advertising.
I hope this perspective was useful to you.
Then, I started needing analytics for my own business. Without analytics, I wouldn't be able to sell with efficiency, and therefore, I wouldn't have a business. Granted, the anti-consumerist in me thinks maybe as a society we shouldn't be so concerned with our efficiency to sell. But, we live in a capitalist world, and I don't see that changing any time soon.
The way I see it now, I'm less concerned about tracking than I am about how big some businesses are -- especially in this space.
At every start up I know, they use analytics, and no one is doing anything spooky. But, I'm sure there's plenty of spooky stuff going on at the FAANGAMUs.
Where do you draw the line? This is parallel to the discussion around government surveillance. Just because they/you can doesn't mean they/you should.
If Internet tracking had no potential use to governments then they'd be regulating the shit out of it. The problem is that governments want their own noses in the same trough, and so all these privacy-invasive technologies continue to be developed. The fact that it's not illegal means that anyone with the ability to implement it can, as long as they can sleep at night.
As to solutions that could help with "selling efficiency", maybe some kind of agreed tiers of analytics from benign to spooky that users can opt-in / opt-out of when visiting a website or using an app. Which GDPR is a bit of a kludgy solution for. The problem is that it only takes one bad advertiser to break agreed rules and the trust is gone again for all advertisers.
One bad apple.
Analytics are unquestionably useful. Collecting the data without user consent is what potentially should be regulated.
The general retort I hear is "what do you have to hide?", as if the only things people want to hide are bad and evil.
Facebook, Amazon, Apple, Netflix, Google, AirBnb, Microsoft, Uber
?
Edit: M for Microsoft, derp
There's a flawed belief that it's necessary. UX on the other hand suggests otherwise. AdTech is not concerned with UX though and tries to wrap targeting in some kind of pseudo user benefit—spin.
Good products and services sell even without tracking. Advertising is an economic powerhouse though and will always push for anti-UX trends because it fundamentally runs polar opposite to the user experience.
Advertisers study how to sell a product, and the most important product they have to sell is advertising.
However, it's not hard to reason why people whose livelihoods depend on being able to track users and increase the value of their ad inventory would be happy about this.
People are unusually good at separating their personal interests from their interests as consumers. I've observed this first hand in many entrepreneurs, be they in brick-and-mortar retail, conventional energy, or obsolete auto parts: it's common for people to be happy about events that benefit their livelihood even when those events harm humanity or the wider ecosystem.
Those people can go hungry or find another line of work. I have zero compassion for that behavior. Justify it how you want, but most people abhor it.
https://www.inc.com/peter-roesler/facebook-to-allow-for-firs...
https://digiday.com/marketing/wtf-what-are-facebooks-first-p...
Basically, FB is expanding its tracking, adding first-party cookie tracking alongside its existing third-party cookies. I suspect the click-id query string is part of that rollout. This helps it get around things like Apple's new ITP (Intelligent Tracking Prevention) 2.0 in Safari.
I’m pretty excited to see this roll out more broadly.
FB just doesn't understand the optics they create.
Sure they do; they just also know that the vast (VAST) majority of people don't understand the implications and/or don't care.
If this wasn't Facebook, it wouldn't be news; gclid has been around for years.
A lot of malicious links are just base64'ed to another redirect service; to another base64'ed address (continue as long as your head can keep up.)
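To illustrate the nesting, here is a sketch of unwrapping such a chain in memory. The `?to=<base64-url>` redirect parameter and the URL-safe base64 alphabet are assumptions for the example; real redirect services vary.

```python
import base64
from urllib.parse import urlparse, parse_qs

def unwrap_redirect(url, max_depth=10):
    """Follow a hypothetical ?to=<base64-url> redirect chain without fetching."""
    for _ in range(max_depth):
        target = parse_qs(urlparse(url).query).get("to")
        if not target:
            return url  # no redirect parameter left: final destination
        url = base64.urlsafe_b64decode(target[0]).decode()
    return url  # give up after max_depth hops
```

The `max_depth` cap matters: a malicious chain could otherwise loop forever.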
https://stackoverflow.com/questions/15090220/maximum-length-...
UTM parameters tag campaigns at the aggregate level, to be used for reporting. The fbclid is almost certainly unique to the click. While you could make, e.g., utm_content unique per click, that's not what it's for. Anything in a UTM parameter is intended to be human-readable and will almost certainly appear as-is in a report somewhere. Click ID parameters are internal IDs used to join data sources, which is not the type of data that should go into UTM parameters.
Note that Google Analytics, the tool that invented UTM parameters, itself does not use UTM parameters when it does this sort of thing. Google Analytics uses the gclid (AdWords) or dclid (DoubleClick) to join against user or click level data from other tools.
The "fbclid" parameters on the other hand seems intended to track individual clicks. That is, Facebook wants to keep tracking individuals when they follow links to off-site pages.
Browsers will now have to resort to removing query parameters to prevent tracking. And websites should really use click-to-enable sharing buttons to prevent Facebook from snooping on everything.
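Stripping such parameters is straightforward; the parameter list below is an illustrative subset of what privacy extensions actually maintain.

```python
from urllib.parse import urlparse, urlunparse, parse_qsl, urlencode

# Illustrative subset of known tracking parameters.
TRACKING_PARAMS = {"fbclid", "gclid", "dclid", "utm_source", "utm_medium",
                   "utm_campaign", "utm_term", "utm_content"}

def strip_tracking(url):
    """Remove known tracking parameters from a URL's query string."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k not in TRACKING_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept)))
```

The hard part isn't the code, it's the arms race: a blocklist like this has to chase every new parameter name a tracker invents.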
Maybe among the URLs shared on Facebook there are a few whose servers only respond to a fixed amount of parameters, changing their behaviour when additional unused parameters are appended to the query string, but I imagine that the number of such cases is so low it's not even worth considering.
What exactly is Facebook breaking, in your opinion?
Would Facebook also break things if they were instead making an async request to the destination and appending a custom header to it, something like "X-Coming-From-Facebook"?
I don't get the part about async requests. What's the scenario?
Extra headers are typically ignored, not least because different clients have sent different headers since the beginning.
I know multiple systems, however, that decode the query string and complain about unknown options, or that don't accept a query string at all for some resources. In the latter case it's ignorance; in the former, it's strict input validation.
An incredibly small number of sites might already be using `fbclid` internally, and an even smaller number won't be able to update their sites.
I am totally on board the don't-break-the-web train, but this just doesn't seem like a problem to me. Maybe once stats come out we'll see that it's a bigger issue, but... I kinda doubt it.
I entered: https://pbs.twimg.com/media/Byv5uWSIIAEf38C.jpg
Facebook made: https://pbs.twimg.com/media/Byv5uWSIIAEf38C.jpg?fbclid=IwAR2...
So sure, there may be an argument that the server should ignore that param. But it's absolutely false to say it "isn't breaking anything".
Adding a request parameter absolutely will break things. And they knew this. The only question was what's worse: Not being able to track some people or breaking some of their links. Facebook decided the former is more important to them and their customers.
And even if nothing else breaks, uglifying the URLs people are posting is in itself an anti-feature.
https://www.facebook.com/business/news/facebook-attribution-...
https://marketingland.com/facebook-attribution-now-available...
https://old.reddit.com/r/adops/comments/9pycuk/facebook_atti...
This HN thread is a perfect example of a news bubble. Googling "fbclid" returns the answer in the first result, but HN votes up an article that has no information and treats it as some secret tracking that FB has implemented. HN is excessively biased against any discussion of tracking/analytics on the internet. The community allows no room for true discussion, only blatantly biased opinions.
https://www.reddit.com/r/analytics/comments/9o52yw/parameter...
Edit - reworded to be less aggressive
According to the metadata for the site, it was originally published on 2018-10-14 and last updated 2018-10-16.
Facebook's own article about the feature came out 5 days after this article was published. So, at the time, Facebook _was_ being secretive about it. Aside from that one line, the entire article reads more like "this is new, I wonder what it does".
Lastly, when I googled "fbclid" the top 3 articles were completely unrelated to Facebook (but then, I'm not in marketing, so this doesn't surprise me) and the fourth is this very article.
The first link (for me) when googling fbclid is the reddit post in r/analytics I linked to, which doesn't have a ton of info but gives more than what the author had. Though you're correct, it was posted after the author originally posted, and I can't fault him/her for not checking in again a few days later.
Now the interesting question will be whether "fbclid" can be tied to individuals. And I couldn't readily find this info in the links you posted. Maybe I'm bad at reading?
Could someone explain or give a reliable article that explains this well?
There's no way for them to know whether or not the extra params on the URL change the resulting page (i.e. example.com/index.php?post_id=1 and example.com/index.php?comment_id=1 could be very different pages, or they could be the same; you don't know).
So in comes the canonical url! This tells Google the proper url required for a specific page. That way if Google gets to a page using two different urls, it can tell that they are the same page.
You can list it by adding a tag to your HTML head.
You can even do fancy things like rewrite URLs entirely (i.e. if the crawler hits example.com/?category_id=1&item_id=2, you can correct the ugly URL by listing the canonical URL as example.com/category/1/item/2).
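Concretely, the tag described above looks like this (URLs here are illustrative):

```html
<!-- In the <head> of the page served at example.com/?category_id=1&item_id=2 -->
<link rel="canonical" href="https://example.com/category/1/item/2">
```

A crawler that reaches the same page via several parameter-decorated URLs (including an appended `fbclid`) can then treat them all as the one canonical address.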