Pixel shaders in WebGPU / wgpu are written in WGSL. The above 2-dimensional for-loop is _NOT_ a proper pixel shader (but it is written in a "Pixel Shader style", very familiar to any GPU programmer).
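To illustrate, a "Pixel Shader style" 2D loop on the CPU looks roughly like this. This is a hypothetical sketch, not the article's actual code; the gradient logic and buffer layout are made up for illustration. A real fragment shader runs the per-pixel body once per pixel, in parallel on the GPU:

```rust
// Hypothetical "pixel shader style" loop on the CPU.
// A real fragment shader executes `shade` once per pixel, in parallel.
fn shade(x: u32, y: u32, width: u32, height: u32) -> [u8; 4] {
    // Placeholder logic: a simple two-axis gradient.
    let r = (255 * x / width.max(1)) as u8;
    let g = (255 * y / height.max(1)) as u8;
    [r, g, 0, 255] // RGBA
}

fn render(width: u32, height: u32) -> Vec<[u8; 4]> {
    let mut framebuffer = vec![[0u8; 4]; (width * height) as usize];
    for y in 0..height {
        for x in 0..width {
            // Sequential on the CPU; embarrassingly parallel on a GPU.
            framebuffer[(y * width + x) as usize] = shade(x, y, width, height);
        }
    }
    framebuffer
}

fn main() {
    let fb = render(4, 4);
    println!("{:?}", fb[5]); // pixel (1, 1)
}
```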
Upon closer inspection, the glyphs are each rendered onto the framebuffer sequentially, one at a time, i.e. NOT in an embarrassingly parallel manner. So the joke starts to fall apart as you look closely.
But those kinds of details don't matter. The post is written well enough to be a good joke but no "better" than needed. (EDIT: It was written well enough to trick me on my first review of the article. But on second and third inspection, I'm noticing the problems, and it's all in good fun to see the post degenerate into obvious satire by the end.)
Honestly it sounds like AI. This is a website in the shape/memory of a blogpost, not an actual blogpost.
"...An easy tutorial in Rust"
A short visit to the author's blog clearly shows they know what they're talking about.
It’s technically possible to translate the code into a compute shader/CUDA/OpenCL/etc., but that is going to be slow and hard to do, due to concurrency issues. You can’t just load/blend/store without a guarantee that other threads won’t concurrently modify the same output pixel.
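The hazard can be sketched on the CPU with atomics. This is a hypothetical illustration, not the article's code: a naive load/blend/store loses updates when two threads hit the same pixel at once, while a compare-exchange retry loop makes the read-modify-write safe (the blend here is just a saturating add on a single channel, purely for demonstration):

```rust
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::Arc;
use std::thread;

// Hypothetical sketch: additive blend into one shared "pixel", done safely
// with a compare-exchange loop instead of a plain load/blend/store.
fn blend_add(pixel: &AtomicU32, src: u32) {
    let mut current = pixel.load(Ordering::Relaxed);
    loop {
        let blended = current.saturating_add(src); // load + blend...
        // ...but only store if no other thread modified the pixel meanwhile.
        match pixel.compare_exchange_weak(current, blended, Ordering::Relaxed, Ordering::Relaxed) {
            Ok(_) => break,
            Err(actual) => current = actual, // lost the race; retry with fresh value
        }
    }
}

fn main() {
    let pixel = Arc::new(AtomicU32::new(0));
    let handles: Vec<_> = (0..8)
        .map(|_| {
            let p = Arc::clone(&pixel);
            thread::spawn(move || {
                for _ in 0..1000 {
                    blend_add(&p, 1);
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    // With a plain non-atomic load/blend/store, some of the 8000 increments
    // would be silently dropped.
    assert_eq!(pixel.load(Ordering::Relaxed), 8000);
    println!("blended value: {}", pixel.load(Ordering::Relaxed));
}
```

Real alpha blending also depends on ordering (blends are not commutative), which is exactly what atomics alone can't give you and why the hardware interlock features below exist.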
For immediate-mode renderers (i.e. desktop GPUs), VK_EXT_fragment_shader_interlock seems to be available to correct those concurrency issues. DX12 ROVs appear to expose similar abilities. Though the performance hit may be larger than on tiling architectures.
So you can certainly read-modify-write framebuffer values in pixel shaders using current hardware, which is what is needed for a fully shader-driven blending step.