One may still choose to discount their explanations because they may be biased sources, but I still think everyone should try to understand what they're saying. Hopefully, being familiar with the technical details will elevate the discussion so people who disagree can point out specific and concrete technical flaws in those explanations rather than just restating a generalized version of "Google is trying to take over the whole web."
[1] previous threads:
https://news.ycombinator.com/item?id=24275752
Asserting that an elevated discussion should center only on technical flaws and disagreements is a myopic way to look at a topic.
There’s more to the web than the technology used to power it. How a technology is used, and what it enables (good or bad) is an appropriate topic for this forum and constitutes elevated discussion.
You misinterpreted what I wrote. My comment did not restrict it to only technical flaws. I already agree with your following statement:
>How a technology is used, and what it enables (good or bad) is an appropriate topic for this forum and constitutes elevated discussion.
Yes. Having us share some armchair anthropology (which I do myself[1]) on the social or secondary effects of technology is constructive dialogue and elevated discussion. My comment was never trying to cut that off.
That said, just rehashing "Google is just trying to own the web!" or slight variations of that meme may feel good for the poster to type out, but it does not educate me on this topic. This is especially degrading to the discussion if the poster restating that common sentiment has a mistaken mental model of what Web Bundles actually can and can't do. Instead, share some quality facts so I as a reader can come to the conclusion on my own that this technology forces unblockable ads and lets Google take over my web experience.
I think folks interested in this topic should get a basic education.
I recommend starting with the intents & desires the project started with, by reading the IETF draft of the use cases:
https://tools.ietf.org/html/draft-yasskin-wpack-use-cases-01
There seems to be a strong urge in Google to cut the connection between the endpoints of the web and become the central authority. Make all traffic flow through their machines. Let no information flow directly between the endpoints.
Right now, requests on the web are kind of p2p. A user requests a website, the publisher serves it any way they see fit. Directly via their servers or via a CDN of their choice.
Google seems to have a strong focus on ending this. Turning the web into Googlebook / AOLoogle.
I wonder why. Do they see their business model threatened on the open web? Or do they see a chance to increase their profit with a closed web?
It is already possible today to put ads into a page directly.
But the benefit for Google would be that if they deliver the bundle, they would know that their ads are in there. Heck, they would know everything that is in there. So they would have full information about all ads and everything that is taking place on this new "web".
Of course, they could also use the opportunity to hinder ad blockers further. For example, by not allowing plugins to get between reading the bundle and rendering it. They have already weakened plugins a lot over recent years.
As with all other power grabs, the ability to resist it is simply a function of how organised the resistance is.
The apathy shown here directly counteracts any urge to resist.
But what absolutely electrifies me is that I can share content with other people: even in an offline scenario I can give them a webbundle with a site if the site supports it, and the friend's browser can cryptographically check everything out & trust that the bundle is from the bundler.
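If you want to try that today, there's a reference JS implementation: here's a minimal sketch, assuming the `wbn` npm package from the WICG webpackage repo (the builder API has shifted a bit between the b1 and b2 bundle formats, so treat the exact calls as illustrative):

    // Build a .wbn file you can hand to a friend on a USB stick.
    // Assumes the `wbn` npm package; check its README, since the
    // BundleBuilder constructor differs between format versions.
    import * as fs from 'node:fs';
    import * as wbn from 'wbn';

    const builder = new wbn.BundleBuilder();
    builder.addExchange(
      'https://example.com/',          // the URL this resource claims to live at
      200,                             // response status
      { 'Content-Type': 'text/html' }, // response headers
      '<h1>Hello from a sneakernet</h1>'
    );
    builder.setPrimaryURL('https://example.com/');
    fs.writeFileSync('site.wbn', builder.createBundle());

Note the signing part, which is what lets the friend's browser verify who bundled it, is layered on separately in the spec family (the signed exchanges / signed bundles side), so this sketch only covers the packaging half.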
> Right now, requests on the web are kind of p2p.
Today's web is decentralized, because there are many domains. But there is little peering among peers: everything is client-server.
This, imo, enables a much more p2p web. It enables a distributed web. Where even if an endpoint is under attack, the web can go on. Where folks who fall over the edge (go offline) can still operate. But yes, seems likely Google intends to be a rather large peer among this newly distributed web.
I recommend the IETF draft of use cases for getting a taste of what WebBundles is for, which hints at this new distributed architecture, by way of describing characteristics a WebBundled web has,
https://wicg.github.io/webpackage/draft-yasskin-wpack-use-ca...
Google is evil, and if we need to wrestle about that, I will. I'd like to see your red-team skepticism about their intentions and your attempt to consider how this may be a trojan horse or a false-compromise. Google is famous for making moves that look neutral or even good from many angles that are ultimately centralizing power in the hands of capitalists. With good reason, we should doubt why they are doing this. It does appear that the core intuition (if I understand correctly) in WebBundles //can// be used to improve decentralization of information power, but I suggest we should paranoically imagine how it may be exploitable by Google (that is our duty here).
I have some limited experience and a ton of skin in the game on this one. For several years, my wiki has had some of the properties of a prototype of a WebBundle, including an attempt at enabling cryptographic verification (https://philosopher.life/#Cryptographic%20Verification). My goal is to emit one huge all-inclusive HTML file with the signature wrapped around it (I sign and push/sync up to every minute). This enables me to distribute my wiki across many networks, even sneakernets, without losing one of the fundamental keys to my voice. I'm a second-class citizen on the internet compared to a large corporation, and I have to be able to effortlessly abandon or accept the losses of rented endpoints (I don't really own my domain, access point, or server: they are merely rented. I do own my private key, though).

In some sense, I have the opportunity to agnostically treat the methods of distribution as a lame middleman pipeline (what we always hoped the internet infrastructure would really be). I give up my ability to control how my wiki is distributed, in some sense, as I enable anyone to pass around the signed wiki as a proxy. In many cases I happily lose the ability to check whether or not I want to send my signed wiki to any individual, and I lack interactive control of a session; it feels like I become a far more passive participant in the web, incentivized to provide the read-only information valuable to ML and disincentivized from relying upon dynamic real-time exchanges. I appreciate being able to prevent people from putting words in my mouth while also enabling users of my wiki to acquire and run the site offline, as they see fit, with maximum privacy and anonymity.
There's the context I have. From what I can tell, from a grassroots p2p practice, the reason that the signature "works" is because a user has maintained an old copy of the wiki, or even just the public key, that they do trust. They've chosen by hand to trust that it's me who signed it. I'm not convinced that Google intends to maximize the automation and decentralization value of that kind of verification. It seems an incidental possibility at best (perhaps that's their quasi-plausible deniability in seeking a monopoly).
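To make concrete what that hand-pinned trust amounts to, here is a minimal sketch using Node's built-in Ed25519 support; the file names are hypothetical stand-ins for my setup:

    // Trust-on-first-use verification: the reader pinned my public key once,
    // by hand; after that, any copy of the wiki from any channel can be checked.
    // File names are hypothetical.
    import { createPublicKey, verify } from 'node:crypto';
    import { readFileSync } from 'node:fs';

    const wiki = readFileSync('wiki.html');          // the one huge all-inclusive file
    const signature = readFileSync('wiki.html.sig'); // detached Ed25519 signature
    const pinnedKey = createPublicKey(readFileSync('authors-key.pub'));

    // Ed25519 takes no digest-algorithm argument, hence the null.
    if (verify(null, wiki, pinnedKey, signature)) {
      console.log('Signed by the key I trust: this sneakernet copy is authentic.');
    } else {
      console.log('Signature mismatch: someone may have put words in his mouth.');
    }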
They aim to be more than merely a very large peer, and I'm begging you to question that more openly with me. This feels like a disruptive feint only seeking decentrality in name. Perhaps their move weakens the powers of many web infrastructures that would otherwise continue to centralize, but I think they will continue to attempt to take over whatever power vacuums arise in that space (I assume they can see how to make money off this far better than I can too). When I see, for example, Dat become a first-class citizen of Chrome and when I see them empower client-side archiving, search, and moderation to users of their infrastructure (while taking Firefox and web standards off the leash), I'll begin to believe they intend to enable a p2p web. For now, I see them building an AMPed blackhole walled-garden where they aim to be the root server of trust and authority on what is salient while allowing the highest paying bidders to have degrees of access or control over our data, minds, and lives.
I'll try and give my perspective as someone who has spent a couple hundred thousand dollars on Google AdWords and also gets a lot of organic traffic from them, and who also does a lot of work on Apple apps and Android apps.
*
You wrote:
> I think their idea is to combine that with signing the bundles, so a page from www.someserver.com can be served by anyone, aka Google. I guess this would mean Google can serve all content on the web.
> There seems to be a strong urge in Google to cut the connection between the endpoints of the web and become the central authority. Make all traffic flow through their machines. Let no information flow directly between the endpoints.
> Right now, requests on the web are kind of p2p. A user requests a website, the publisher serves it any way they see fit. Directly via their servers or via a CDN of their choice.
> Google seems to have a strong focus on ending this. Turning the web into Googlebook / AOLoogle.
> I wonder why. Do they see their business model threatened on the open web? Or do they see a chance to increase their profit with a closed web?
*
OK, so think of Google as THE STARTING POINT that everyone uses for the Internet.
There are 3 critical things required:
A) Trust
B) Efficiency
C) No other starting points
Now, Google's problem is that it knows (like all technology companies) that things change very fast in technology.
Look at Facebook having to buy Instagram and then WhatsApp, then having to GovernmentAttack TikTok, and not being able to buy Snapchat.
Google, on the other hand, has a very serious issue
A) Its main 'starting point competitors' are not 'buyable' or 'governmentAttackable'
It is Amazon as the starting point for shopping, Facebook as the starting point for 'people who think the Internet is Facebook', and then new competitors like completely different search methodologies and vertical search engines that are not even 'search engines' but take away Google's position.
B) It has been following a policy of 'shift everything to google properties'
This creates worsening search results,
which leads to a loss of trust and efficiency.
Efficiency is really hampered because a typical search engine user now spends 60% of their time avoiding second-rate Google products to find the remaining 40% of results, and then sorting through those to find THE BEST OPTION
C) See, the thing is, to shift everyone onto Google properties, Google is not just throwing lots of Google results into search, it is also hiding the BEST-of-BREED services or plain stealing their data (like Yelp and Genius)
D) Trust is further eroded with so much spying and anti privacy
*
So Google is in this very unique situation where it has to take EXTREME measures
such as
try and shift everyone to AMP
try and shift everyone to Web Bundles
try and shift everyone to No Tracking Allowed except by Google
Think of someone who has the biggest trade port between two continents. And they make a TON of money.
Then other ports started showing up
So what is their option?
Buy up all the ports? What if that is not possible? What if peeping-tom Facebook is not willing to sell its port?
Then Google knows that sooner or later its port will become one of many, and goodbye profits. So they start pretending that the only safe way to cross the ocean is on their ships. So EVERYONE can cross only on their ships.
Very similar to FB being scared and starting Internet.org. The best way to eliminate competitors: control the ENTIRE internet and choose who can be shown.
By the way, SpaceX with Starlink and Amazon with Kuiper are also in a position to do this (not sure about SpaceX, but Amazon definitely would)
turn the Internet into a Pay to Play zoo
*
There are lots of other signs too
1) Google click quality is down
2) The amount of click fraud is going up
We see clicks coming from Google servers. On customer service, they admit a certain percentage are fake. Whenever we see fake clicks, they will still charge us and then (on their own) do a token refund.
So we see $100 of fake clicks. 12 hours later there is a token $5 refund for fake clicks (they use some other term, will have to check what)
3) The amount of organic traffic you get depends on how much you spend
4) If you spend less, then they start showing negative results in organic search to affect sales from people coming to you anyway
Google has already crossed the inflection point. Unless they can magically buy FB and/or Amazon they are basically dead
Just to elaborate on that
They are squeezing every little bit out, even using lots of wrong methods to do that, and delivering less and less value.
In MANY verticals, people have switched COMPLETELY to FB and other advertising.
Google is still good for many, many areas. However, they are so saturated and so inefficient at giving you bang for the buck, it's crazy
Meanwhile, FB will let you do anything you want to FB users, provided you pay them enough
So for advertisers who don't mind such a set up, FB is 10 times better
* A lot of Google advertising money is INERTIA
It's unfortunate that TikTok is getting ticktocked. Otherwise it would have eaten massively into Google's earnings
Google also has very high costs to remain 'default' in the web browsers.
They're paying Apple $10 billion a year to be default search
Apple should give them a fitting gift for stealing iPhone ideas and design for Android, and build its own search engine. Google's market cap would halve within a year if Apple did that.
Google wants all the click data and the click through navigation data about users (by way of passive logs) so they can sell more ads.
There are no other real world problems that web bundles solve.
Web bundles change this relationship so that anyone can cache sites if it benefits them to do so. If you share a link on Twitter or Facebook or Discord or Slack, they can cache the page on their servers and deliver it through the connection you already have open to them.
Web Bundles also open the door for network-local caches that don’t require MitM or trusting the cache.
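As a sketch of what that could look like for, say, a chat platform's cache server (the paths are made up, and the `v=b2` Content-Type parameter and nosniff requirement should be checked against the current spec and browser support):

    // A link-sharing site serving a cached bundle instead of proxying the origin.
    import { createServer } from 'node:http';
    import { readFileSync } from 'node:fs';

    const cachedBundle = readFileSync('cache/shared-article.wbn'); // hypothetical cache entry

    createServer((req, res) => {
      if (req.url === '/cache/shared-article.wbn') {
        res.writeHead(200, {
          'Content-Type': 'application/webbundle;v=b2', // bundle MIME type; version parameter may vary
          'X-Content-Type-Options': 'nosniff',          // Chrome's bundle loading expects nosniff
        });
        res.end(cachedBundle);
      } else {
        res.writeHead(404);
        res.end();
      }
    }).listen(8080);

The point being: the client gets the exact bytes of the origin's bundle over the connection it already has open, so the cache never needs to MitM anything.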
Do they even follow any of their original advice, or does Google basically keep doing over-engineered stuff, fixed by adding another layer of over-engineered stuff?
Let's talk Gmail. I just refreshed the window and it made close to 400 requests and a ~8MB download, which translates to nearly 40MB of resources. And it keeps making more requests even when I'm not doing anything.
And a refresh of Google.com, the search page, made 33 requests and nearly a MB of download.
And they are preaching to the world about optimizing the web?
The transformation has been amazing. Google properties, to the extent that they used to have a unified aesthetic, used to be quite lean and mean. How anyone can look at a product like Gmail and say that this is coming from a standard bearer on efficient use of resources is beyond me.
I mean, kudos to Google for (probably) not cheating here, but that's a low score.
And improving user experience for billions of users is not a negligible advantage either.
Maybe it is a sign that Google is ready to be taken over by a less bloated company. It is, after all, how Google came to power: by being efficient and to the point. Just look at the original Google home page compared to its competitors.
Same problem with AMP. Instead of asking news sites to fix their slow pages, it forced them through AMP by promising better result ranking.
Ask them to make their sites faster within a month, or tell them they'll get booted off search. You'd be surprised at how fast they comply.
My issue with web bundles is that it's yet another pile of complexity with very little incremental value over things that already exist. A poor tradeoff.
There's a substantial on-going cost to each web standard added so each one needs to "pay" for itself with broad or deep usefulness. Web bundles are just another way to skin a cat.
Also, I think web bundles can be an Electron replacement for some use cases, so that totally offline JavaScript web apps don't have to use Electron.
I can imagine this being a problem when news stories turn out to be false alarms and Google happily keeps serving the original content instead of the corrected content.
There's also a vulnerability risk here, as a signed package might very well be used to host phishing pages on web caches.
The massive control Chrome and Android gives them means they can do whatever they want already, but at least with a private platform they won’t have to fight people and deal with the negative PR of doing evil stuff. And then the rest of us who like privacy and competition and ad blockers can use the “legacy” web.
https://wicg.github.io/webpackage/draft-yasskin-wpack-use-ca...
What we have here is a budding conspiracy theory, not even a theory, just gesticulation. Consensual Delusion, a belief that we are persecuted by secret forces that must be held off, held at bay.
This started months ago with an incoherent rambling ticket by the Brave author that is being cited. He spent months going back & forth with wild accusations & unspecified concerns. After dozens and dozens of exchanges, he finally named one single scenario, that people might "hide" their tracking malware by renaming files as they put them into the bundle.
Color me extremely unimpressed & unscared. Enormous sound & fury, for a capability that is in no way different from the web we already have today. It's not hard to set up a webserver to randomize asset names. Nothing about webbundles is new or changes that.
Consensual Delusions like this hacked-up hoax of a story threaten reality as we know it. As the old civic videos say: DON'T BE A SUCKER. Anyone selling fear, uncertainty, & doubt is to be met with skepticism. Increasingly, FUD is how Apple/Mozilla/Brave are selling their anti-feature policy. "Trust us, we won't let the web work with MIDI" doesn't sound that great, but is much more honest than what we get, which is "these engineers & standards groups working on these specs are secretly trying to undermine this treasured web which we must protect & keep as-is at all costs". The involved engineers' histories indicate they obviously care enormously about bettering the web, & in this case are combating sizable transpiling tool bloat for devs, enabling offline sharing & an offline-capable web, and literally fighting censorship, which are truly worthy goals that will vastly help the web.
This is all super hard to work through. Yes, Google used the web to reap enormous profit by means of enormous information control & inventory systems for ads & eyeballs. But Google also would not exist without the web, & historically the web was a small toy that couldn't do much compared to apps. The tables have turned, & the web is clearly ascendant, much safer, & increasingly we understand that the limitations of UX were largely from lack of will to explore & test what limits there really were, so the situation is no longer so obviously tense.

But Google Chrome & Chromium & the spec work Google does are, imo, designed to improve a communal shared resource for all humanity, designed to greaten the web, not subvert it. We can see that here, as the engineers working on WebBundles have shown a thousand times over their commitment to honest, above-board, clear integrity as they have tried & tried & tried to work with Peter Snyder while he fumbled & plodded his way toward a scenario where WebBundles pose any real danger, & Peter has imo failed at presenting anything. We can see the engineers take Peter seriously, try to work with him.

And so I feel it is in general. It is intimidating as hell that the web is so big, has so many capabilities, that so much keeps getting added, and that so much of that comes from gigantic, unimaginably huge pools of capital derived from eyeballs-on-screen. But somehow it has been working out: the engineers have genuinely cared about doing the right thing, & usually the standards bodies & TAG can eventually come to harmony & agree, & the web improves.
Peter's dissent thread:
https://github.com/WICG/webpackage/issues/551
Personally I greatly look forward to WebBundles. It will radically improve the JS module situation, yay, a thousand times yay, & giving people the ability to share content directly with one another, without relying on centralized infrastructure, is one of the most genuine pure & true new expanses for the web & one I am greatly looking forward to.
[1] https://news.ycombinator.com/item?id=17942252
[2] https://news.ycombinator.com/item?id=16164549
[3] https://news.ycombinator.com/item?id=23221264
The list can go on
Anyway, users always lose. Another coincidence, and in this case they will not lose, right? :-)
That's quite a statement to make. In no way do "conspiracy theories almost always come true."
Inside an HTML file, we introduce an attribute for embedded resources called cache="identifier". Script tags and style tags will have this attribute defined; an embedded-image element would also need to be introduced. Inline all your resources. The browser will fetch the HTML and add whatever has the cache="identifier" attribute to its cache.
Then, when the browser fetches a page, it will send a Cache-Got header: a serialized bloom filter of the identifiers it has cached.
The server will check the bloom filter to see whether an item needs to be sent to the client, and will exclude the contents of already-cached embedded resources by emitting an empty script or style tag.
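If I follow, the server-side half would look roughly like this sketch (the Cache-Got header, the filter size, and the hash scheme are all part of this hypothetical proposal, not an existing standard):

    // Check the client's Cache-Got bloom filter and blank out inline resources
    // the client (probably) already has. All names here are hypothetical.
    import { createHash } from 'node:crypto';

    const FILTER_BITS = 8 * 1024; // filter size both sides must agree on
    const HASH_ROUNDS = 3;        // k hash functions

    function bitPositions(identifier: string): number[] {
      // Derive k bit positions from one SHA-256 digest of the identifier.
      const digest = createHash('sha256').update(identifier).digest();
      const positions: number[] = [];
      for (let i = 0; i < HASH_ROUNDS; i++) {
        positions.push(digest.readUInt32BE(i * 4) % FILTER_BITS);
      }
      return positions;
    }

    function clientProbablyHas(filter: Buffer, identifier: string): boolean {
      return bitPositions(identifier).every(
        (p) => (filter[p >> 3] & (1 << (p & 7))) !== 0
      );
    }

    // While rendering, omit bodies the client already holds:
    function renderScript(filter: Buffer, id: string, body: string): string {
      return clientProbablyHas(filter, id)
        ? `<script cache="${id}"></script>`          // empty tag; browser fills from its cache
        : `<script cache="${id}">${body}</script>`;  // first visit: ship the body
    }

One wrinkle: bloom filters give false positives, so occasionally the server would omit a body the client doesn't actually have, and the scheme would need a recovery fetch for that case.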
EDIT: Why is this being downvoted?