(Why not just the switches for the DC(s) your VPC is in? Because GCLB IP addresses are anycast addresses, with BGP peers routing them to their nearest Google POP, at which point Google's own backhaul — that's the "premium-tier networking" — takes over delivering your packets to the correct DC. Doing this requires all of Google's POP edge switches to know that a given GCLB-netblock IP address is currently claimed by "a project in DC X", in order to forward the anycast packets there.)
To ensure consistency between deployed GCLB config versions across this huge distributed system — and to avoid their switches being constantly interrupted by config changes — it would seem to me that at least one, and perhaps all four, of the following mechanisms are in play:
1. some distributed system — probably something Zookeeper-esque — keeps global GCLB state, receiving virtual GCLB resource updates at each node and consensus-ing with the nodes in other regions to arrive at a new consistent GCLB state. Reaching this new consensus state across a globally-distributed system takes time, and so introduces latency. (But probably very little, because the resources being referenced are all sharded to their own DCs, so the "consensus algorithm" can be one that never has to resolve conflicts, and instead just needs to ensure all nodes have heard all updates from all other nodes.)
2. Even after a consistent global GCLB state is reached, not every one of those new consistent global states gets converted into a network-switch config file and pushed to all the POPs. Instead, some system takes a snapshot every X minutes of the latest consistent state of the global-GCLB-config-state system, and creates and publishes a network-switch config file for that snapshot state. This introduces variable latency. (A famous speedrunning analogy: you can do everything else to remediate your app problems as fast as you like, but your LB config update arrives at a bus stop, and must wait for the next "config snapshot" bus to come. If it just missed the previous bus, it will have to wait around longer for the next one.)
3. Even after the new network-switch config file is published, the switches might receive it, but only "tick over" into a new config file state on some schedule, potentially skipping some config-file states if they're received at a bad time. Or, alternatively, the switches might themselves coordinate so that only when all switches have a given config file available, will any of them go ahead and "tick over" into that new config.
4. Finally, there is probably a "distributed latch" to ensure that all POPs have been updated with the config file that contains your updates, before the Google Cloud control plane will tell you that your update has been applied.
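Mechanism #1's "consensus that never resolves conflicts" can be sketched concretely. Since each GCLB resource is only ever written by the DC that owns it, merging two nodes' views is just "take the higher version per resource" — a last-writer-wins merge with no possibility of disagreement. This is a guess at the shape of the thing, not Google's actual protocol; the IPs and version tuples here are hypothetical:

```python
def merge_states(local: dict, remote: dict) -> dict:
    """Merge two per-node views of global GCLB state.
    Keys are (hypothetical) GCLB anycast IPs; values are
    (version, owning_dc) pairs. Because each resource is only
    ever updated by its owning DC, taking the higher version is
    always safe -- no two nodes can ever hold conflicting values
    for the same version of the same resource."""
    merged = dict(local)
    for ip, (version, dc) in remote.items():
        if ip not in merged or merged[ip][0] < version:
            merged[ip] = (version, dc)
    return merged

# Convergence is just "everyone has heard everything":
a = {"34.120.0.1": (3, "us-east1")}
b = {"34.120.0.1": (2, "us-east1"), "34.120.0.2": (1, "europe-west1")}
merge_states(a, b)
# -> {"34.120.0.1": (3, "us-east1"), "34.120.0.2": (1, "europe-west1")}
```

Note that the merge is commutative and idempotent, which is exactly why no conflict-resolution round-trips are needed — gossip in any order, and all nodes converge.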
No matter which of these factors are at fault, it's a painfully long time. I've never seen a GKE GCLB Ingress resource take less than 7 minutes to acquire an IP address; sometimes, it takes as much as 17 minutes!
And while there's definitely some constant component to the time that this config rollout takes, there's also a huge variable component to it. At least one of #2, #3, or #4 must be happening; possibly multiple of them.
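The constant-plus-variable split falls straight out of the bus-stop model. Treating #2 as a periodic snapshot and #4 as a fixed push-plus-latch cost, the total delay is a fixed floor plus a wait that depends entirely on when you arrived relative to the snapshot schedule. The numbers below are illustrative assumptions, not Google's actual intervals:

```python
import math

def config_visible_at(update_time: float,
                      snapshot_period: float,
                      push_and_latch_delay: float) -> float:
    """Return the (model) wall-clock time, in minutes, at which an
    update submitted at `update_time` is acknowledged, assuming a
    periodic snapshot 'bus' (#2) plus a fixed cost to push the file
    to every POP and trip the distributed latch (#4)."""
    # The update waits at the bus stop for the next snapshot tick...
    next_snapshot = math.ceil(update_time / snapshot_period) * snapshot_period
    # ...then pays the constant push-and-latch cost.
    return next_snapshot + push_and_latch_delay

# Two updates with identical fixed costs see very different totals,
# depending on whether they just missed or just caught the bus:
just_missed = config_visible_at(5.1, snapshot_period=5.0, push_and_latch_delay=6.0)
just_caught = config_visible_at(4.9, snapshot_period=5.0, push_and_latch_delay=6.0)
# just_missed - 5.1 = 10.9 minutes of waiting; just_caught - 4.9 = 6.1 minutes
```

Under these made-up parameters the floor is ~6 minutes and the spread is one full snapshot period — the same shape as the observed 7-to-17-minute range, though the real intervals are unknown.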
---
You might ask why load-balancer changes in AWS don't suffer from this same problem. AWS doesn't have nearly as complex a problem to solve, since AFAIK their ALBs don't give out anycast IPs, just regular unicast IPs that require the packets to be delivered to the AWS DC over the public Internet. (Though, on the other hand, AWS CDN changes do take minutes to roll out — CloudFront, at least, does distributed version-latching for rollouts, and might be doing some of the other steps above as well.)
You might ask why routing changes in Cloudflare don't suffer from this same problem. I don't know! But I know that they don't give their tenants individual anycast IP addresses, instead assigning tenants to 2-to-3 of N anycast "hub" addresses they statically maintain; and then, rather than routing packets arriving at those addresses based purely on the IP, they have to do L4 (TLS SNI) or L7 (HTTP Host header) routing. Presumably, doing that demands "smart" switches, which can then be arbitrarily programmed to do dynamic stuff — like keeping routing rules in an in-memory read-through cache with TTLs, rather than depending on an external system to push new routing tables to them.
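That read-through-cache guess is easy to sketch. The point of the design is that rule changes become visible within one TTL, with no global config push at all: the "switch" consults its cache and only falls back to the control plane (modeled here as a plain callable) on a miss or expiry. This is a sketch of the hypothesized approach, not Cloudflare's actual implementation:

```python
import time

class TTLRouteCache:
    """Read-through cache for routing rules keyed by SNI / Host
    header, with per-entry TTLs. On a miss or an expired entry,
    the rule is fetched from an external lookup (e.g. an RPC to
    a control plane) and cached; otherwise the cached backend is
    returned without any external call."""

    def __init__(self, lookup, ttl_seconds: float = 30.0):
        self._lookup = lookup    # control-plane fetch, e.g. an RPC
        self._ttl = ttl_seconds
        self._cache = {}         # hostname -> (backend, expires_at)

    def route(self, hostname: str) -> str:
        now = time.monotonic()
        entry = self._cache.get(hostname)
        if entry is None or entry[1] <= now:
            backend = self._lookup(hostname)   # read-through on miss/expiry
            self._cache[hostname] = (backend, now + self._ttl)
            return backend
        return entry[0]

# Usage: a rule change propagates within ttl_seconds of being made,
# because stale entries simply age out -- no push pipeline needed.
control_plane = {"example.com": "origin-pool-7"}   # hypothetical rules
cache = TTLRouteCache(control_plane.__getitem__, ttl_seconds=30.0)
cache.route("example.com")  # "origin-pool-7" (fetched once, then cached)
```

The trade-off versus the pushed-config-file model is the inverse of GCLB's: updates are bounded by the TTL rather than a snapshot schedule, at the cost of the data path occasionally blocking on a control-plane fetch.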