We are in the yellow, but the biggest culprits for blocking time are... Google Tag Manager, Google Ads (and Google Analytics, where we still have it). So yeah, thanks Google, can't wait to lose on SEO due to your own products. And also, thanks for releasing this without the proper analysis tooling. (https://web.dev/debug-performance-in-the-field/#inp : this is not tooling, this is undue burden on developers. Making people bundle an extra ["light" library](https://www.npmjs.com/package/web-vitals) with their clients, forcing them to build their own analytics servers to understand what web-vitals complains about... or is often wrong about.)
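For what it's worth, the workflow that page describes boils down to wiring the library's `onINP` callback into your own reporting pipeline. A minimal sketch of the plumbing you end up building yourself (the function names and the `/analytics` endpoint are invented for illustration; in a browser you'd pass `queueMetric` to `onINP` from the web-vitals package and flush on `visibilitychange`):

```javascript
// The part of "field debugging" that web.dev leaves to you: collect
// web-vitals entries and ship them to your own server.
const queue = [];

function queueMetric(metric) {
  // keep only the fields your server actually needs
  queue.push({ name: metric.name, value: metric.value, id: metric.id });
}

function flushBody() {
  // the string you would hand to navigator.sendBeacon('/analytics', body)
  const body = JSON.stringify(queue);
  queue.length = 0;
  return body;
}
```

And that's before you've written the server that stores and aggregates any of it.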
My company is scrambling to account for the changes here, and ultimately it will be users who suffer until we have the proper data available.
This should’ve been a standard that all major browsers had implemented and agreed to before being rolled out generally.
This has been the case for over a decade with Google's "Lighthouse" analysis tool as well. I used to use it as part of a site analysis suite for my clients - a good portion of the time, my smaller clients would end up deciding to replace Google Analytics entirely with a different product because of it.
> ...a good portion of the time, my smaller clients would end up deciding to replace Google Analytics entirely with a different product because of it.
This seems like a good outcome, then? Market pressure may be the only way to get Google analytics to finally cut their footprint.
I don't think the generative AI results they're going to do will be much better either.
As long as they avoid the pattern of adding a global loading spinner that covers the whole screen. That’s just the worst possible loading screen. I suppose it would still pass this metric.
Also I’m not sure if I totally understand the metric - I think it’s simply when the next frame is rendered post interaction, which should easily be under 200ms unless you’re
1. doing some insane amount of client side computation
2. talking over the network far away from your service or your API call is slow / massive
and both of these are mitigated by having any loading indication, so I don’t understand how this metric will be difficult to fix.
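For case 1, the usual mitigation is to chunk the computation and yield back to the event loop so the browser can paint a frame between chunks. A rough sketch, assuming nothing beyond `setTimeout` (the helper names are my own):

```javascript
// Yield control so the browser can handle input and paint a frame.
const yieldToMain = () => new Promise(resolve => setTimeout(resolve, 0));

// Process a large array in small chunks instead of one long task, so the
// next paint after an interaction isn't blocked for hundreds of ms.
async function processInChunks(items, handle, chunkSize = 100) {
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) handle(item);
    await yieldToMain(); // a paint can happen here, keeping INP low
  }
}
```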
It also seems to be a metric that is very easily gamed.
If all that matters is instant feedback, then just draw the loader as soon as the user clicks "add to cart"; don't wait for the request to start. It doesn't matter whether the request itself takes X or Y seconds.
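Concretely, the "gamed" version is just a synchronous state flip before the `await`. A sketch (the `button` object and handler names are invented; in a real page it would be a DOM element):

```javascript
// Paint feedback first, then do the slow work. INP only measures up to
// the next paint, so the request duration no longer counts against it.
async function addToCart(button, sendRequest) {
  button.label = 'Adding...'; // synchronous update: the next frame shows this
  button.disabled = true;
  try {
    await sendRequest();      // the slow network call happens after the paint
    button.label = 'Added';
  } finally {
    button.disabled = false;
  }
}
```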
Fun fact: the current JS-specific metric (which is being phased out) is First Input Delay, and it was explicitly designed to avoid this gaming:
> FID only measures the "delay" in event processing. It does not measure the event processing time itself nor the time it takes the browser to update the UI after running event handlers. While this time does affect the user experience, including it as part of FID would incentivize developers to respond to events asynchronously—which would improve the metric but likely make the experience worse.
>
> - https://web.dev/fid/
I wonder why they decided to reconsider this trade-off when designing INP.
Of course you should be setting a visual "in progress" state before you send out a request. And yes, that's supposed to be instantaneous, not measured in "X or Y seconds". That's the entire point: to acknowledge that the user did something, so they know they clicked in the right place, that another app hasn't stolen keyboard focus, etc.
But even in that case, instant feedback is probably better for the user. It lets them know the website isn’t broken and they don’t need to click again, and it also makes the experience feel snappier.
A site that genuinely responds quickly is best. But for a slow site, I'd always prefer one that at least gives me instant feedback that I clicked something over one that doesn't.
Have you used doordash.com? I don't know how they do it, but they manage to exceed 200ms on every single click, easily. And they're not alone.
I don't think I implied this at all actually.
Example: NYTimes.com on Mobile Safari with AdGuard. 18 seconds.
Google is being really disingenuous with its so-called metrics. A stroke of the pen could make INP 200ms across the top 500 sites.
Dear lord, I can't imagine that's the fault of NYTimes. Something is off with your setup.
NYTimes.com is super quick and responsive on my devices.
Do you observe the same behaviour in private mode? Something is going wrong on your device.
I’m not going to expect a 16ms response for every animation, but go much slower than that and you see jank.
For page interactivity, though? 0.2s is pretty damn fast. Human reaction time is 0.15-0.25s.
So it’s pretty reasonable.
The most recent example I've observed this on was a website with a heavy interactive location finder experience that lived on a single page. Fine, penalize that page. There's a chance users won't initially navigate there anyway. However, because a (very minimal, practically irrelevant amount of) similar content on the rest of the page was present on 18 other pages, the impact was huge.
The reality of the web today makes this pretty dire in my mind. Many businesses choose to run websites that are generally fast, but they have to engage with third-party services because they don't have the means to build their own map, event scheduler, or form experience. The punishment doesn't fit the crime.
I am going to have to disagree. Final HTML from the server is just that: it's final. The client displays it and it's done. No XHR, no web sockets, no JS eval. You can immediately use the webpage, and the web server doesn't care who you are anymore. With an SPA, this is the best case. Maybe you even start with SSR from the server and try to incrementally move from there. Regardless, the added complexity of SSR-to-SPA and other hybrid schemes can quickly eat into your technical bullshit budget, and before you know it that ancient forms app feels like lightning compared to the mess you proposed.
Reaching for SPA because you think this will make the site "faster" is pretty hilarious to me. I've never once seen a non-trivial (i.e. requires server-side state throughout) SPA that felt better to me than SSR.
What about Gmail? That has all the state server side. How impressive would it be if all rendering was done server side?
Google's one (of many) heads has no idea what another (of many) heads says or does.
My point still stands.
"The First Contentful Paint (FCP) metric measures the time from when the page starts loading to when any part of the page's content is rendered on the screen." [1]
Unless your server is overwhelmed and can't send back data fast enough, there's literally no way to call "1 second before anything is rendered on screen" fast.
In the context of the tweet this is even more egregious. They were talking about Reddit's yet another redesign, and how it was fast. Reddit is a website that displays text and images. Their server responds in 200 milliseconds max. And yet, they were talking about how spending 0.9 seconds to display some info (menu on the left?), and 2.4 seconds to display actual content, is fast.
And that comes from "engineering leader at Chrome". We are at a point in time where people literally don't understand what fast is.
> Chrome usage data shows that 90% of a user's time on a page is spent after it loads
Clearly impressive, breakthrough research going on at Google.
With their engineering leader [1] arguing that 2.4s to display text and images is fast, no wonder they present "people still spend time on websites after they have spent an eternity loading" as a surprising find.
[1] https://twitter.com/addyosmani/status/1678117107597471745?s=...
...more time than it takes to load. much, much more time.