Rendering is still done in bulk for the changed areas, which avoids re-rendering expensive elements (e.g., transformed video buffers, deeply layered effects, expensive shaders). It's a fundamental part of most UI frameworks.
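A minimal sketch of that damage-tracking idea (names and shapes are hypothetical, not any framework's actual API): widgets report dirty rectangles, and the compositor repaints only the coalesced region rather than the whole frame.

```python
def union(a, b):
    """Bounding box of two (x0, y0, x1, y1) rects."""
    return (min(a[0], b[0]), min(a[1], b[1]),
            max(a[2], b[2]), max(a[3], b[3]))

def coalesce_damage(rects):
    """Collapse all reported damage into one repaint region.
    Real compositors often keep a region list instead of a single
    bounding box, but the principle is the same."""
    region = rects[0]
    for r in rects[1:]:
        region = union(region, r)
    return region

damage = [(10, 10, 20, 20), (15, 5, 30, 25)]
print(coalesce_damage(damage))  # (10, 5, 30, 25)
```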
The Gaussian blur and lensing effects would still slow things down by needing to fetch pixels from the render target to compute each fragment, versus painting opaque pixels.
And yeah, having a render step depend on the output of a previous non-trivial render step is Bad™.
Software renderers typically do the optimisation you're suggesting to reduce memory and CPU consumption, and this was a bigger deal back when they were the only option. I think some VNC-like protocols benefit from this kind of lazy rendering, but the actual VNC protocol just diffs the entire frame.
On the GPU, the penalty for uploading textures over the bus negates the benefit, and the memory and processing burden is minimal relative to AAA games, which push trillions of pixel computations and use GBs of compressed textures. GPUs are built more like signal processors and have quite large bus widths, with memory arranged so that adjacent pixels are more local to each other. Their very nature makes the graphics demands of a 2D GUI negligible.
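One common way that 2D locality is achieved is a tiled or Morton-order ("Z-order") texture layout, where the bits of the x and y coordinates are interleaved so neighbouring pixels land near each other in the 1D address space. A sketch of the bit-interleaving idea (actual GPU swizzle patterns vary by vendor):

```python
def morton_index(x, y, bits=16):
    """Interleave the bits of x and y: adjacent pixels in 2D map to
    nearby 1D addresses far more often than with a row-major layout,
    which improves texture-cache hit rates."""
    idx = 0
    for i in range(bits):
        idx |= ((x >> i) & 1) << (2 * i)
        idx |= ((y >> i) & 1) << (2 * i + 1)
    return idx

# The four pixels of a 2x2 block are contiguous in memory:
print([morton_index(x, y) for y in (0, 1) for x in (0, 1)])  # [0, 1, 2, 3]
```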
Each window renders to one or more buffers that it submits to the display server, which are then either software or hardware composited ("software" here referring to using the GPU to render a single output buffer, vs. having the GPU scanout hardware stitch the final image together from all the source buffers directly).
Note that in the iPhone case, the glass blur is mostly an internal widget rendered by the app; what is emitted to the display server/hardware is opaque.
> It might be that windows underneath have gone to sleep and aren't updating,
The problem with blur is that when the content underneath does update, the blur must also update, and rendering of it cannot start until the content underneath has finished rendering.
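The dependency also widens the damage: a blurred pixel samples a neighbourhood of the content below it, so any dirty rect under the blur dilates by the kernel radius before the blur layer itself can repaint. A sketch of that bookkeeping (rect convention and radius are illustrative):

```python
def expand_damage(rect, radius):
    """A blurred pixel samples a (2*radius + 1)-wide neighbourhood of
    the content underneath, so a dirty rect (x0, y0, x1, y1) in that
    content dirties a rect grown by `radius` in the blur layer."""
    x0, y0, x1, y1 = rect
    return (x0 - radius, y0 - radius, x1 + radius, y1 + radius)

# A 10x10 update under a radius-8 blur dirties a 26x26 area of the
# blur layer -- and that pass cannot begin until the content pass ends.
print(expand_damage((100, 100, 110, 110), 8))  # (92, 92, 118, 118)
```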
> Software renderers typically do the optimisation you're suggesting to reduce on memory and CPU consumption,
I am solely speaking about GPU-accelerated rendering, where this optimization is critical for power efficiency. It's also required to propagate all the way down to the actual scanout hardware.
It also applies to CPU rendering (and gpu-accelerated rendering still CPU renders many assets), but that's not what we're talking about here.
> I think some VNC-like protocols benefit from this kind of lazy rendering, but the actual VNC protocol just diffs the entire frame.
Most modern, non-VNC remote desktop protocols use h264 video encoding. Damage is still propagated all the way through so that the client knows which areas changed.
The frames are not "diffed" except by the h264 encoder on the server side, which may or may not be using damage as input. The client has priority for optimization here.
> Their very nature makes the kinds of graphics demands of a 2D GUI very negligible.
An iPhone 16 Pro Max at 120 fps is sending about 13.5 Gb/s to the display, and the internal memory-bandwidth requirements are much higher. This is expensive.
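Back-of-envelope arithmetic for that figure, assuming the 2868x1320 panel and 10 bits per colour channel (both assumptions, not confirmed specs):

```python
width, height = 2868, 1320      # assumed panel resolution
fps = 120                       # ProMotion peak refresh rate
bits_per_pixel = 30             # assuming 10-bit colour, 3 channels

gbps = width * height * fps * bits_per_pixel / 1e9
print(f"{gbps:.1f} Gb/s")  # ~13.6 Gb/s of raw pixel data
```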
Not rendering a texture and instead passing it off to scanout hardware, so that the render units can stay off, is the difference between a laptop giving you a ~5-hour battery life and a 15-20+ hour battery life.
The GPU could texture your socks off, but you're paying a tax every microsecond your GPU's render units are active, which matters when you're battery powered or thermally constrained. This is why display servers and GUI toolkits go to great lengths to avoid rendering anything.