This sounded like a joke, but I wasn't 100% sure.
It's a real thing (code from Stack Overflow):
const context = canvas.getContext("webgl", {
  failIfMajorPerformanceCaveat: true,
});
// getContext() returns null here if the browser decides a
// "major performance caveat" (e.g. software rendering) applies.
Pretty neat.* Woefully ambiguous and underspecified - what counts as "major"? A 12 FPS drop compared to desktop? Anything below 30 FPS? Frame jitter beyond a certain standard deviation? The answer depends on the app (a fast-paced, frame-perfect game and a "well, I just want the animation to be smooth" page necessarily have different performance requirements), but this flag leaves the decision entirely up to the browser.
* Oh, it's meant as an umbrella term for checking whether software rendering, frame readback, or other as-yet-unknown bottlenecks are engaged - those are the "performance caveats". But each of those is either present or not; there's no way to quantify them as "major", so the qualifier is meaningless. Unless the browser conditionally succeeds even with software rendering when some unspecified performance threshold is met - the two sentences of documentation about this flag mean I have no way to know.
* If you Google the name of the flag, near the bottom of the first page you'll find an email thread where someone brought up points like this and was ultimately shot down and the flag pushed through because "the Maps team really needs this." The technical equivalent of legislating from the bench. I wish I could create new browser features to solve all my inconveniences too. Why the Maps team couldn't figure out how to measure the canvas FPS and degrade on their own is a mystery to me.
* To round it all out, the name is also really bad. "failIfMajorPerformanceCaveat" is a weird negative framing - why not "requireUnhinderedPerformance"? That would be consistent with the universal convention of "require" (rather than the goofy "failIf") and a lot clearer about the flag's apparent intent: to try to ensure performance similar to native GL, rather than this ambiguous "performance caveats" business.
Sorry, just a rant about shovelware in the browser stack.
I hate the name too, but the flag was a necessary compromise to get past an impasse.
The problem at the time was that fullscreen WebGL in Firefox was an absolutely terrible experience on some machines, because they did a readback of the full framebuffer from the GPU to the CPU every frame. This didn't just make that tab slow, or even just the whole browser. It actually bogged down the entire computer. It made the mouse pointer skip. It was beyond unacceptable.
My preferred solution was that Firefox should disable WebGL on these machines. But the Firefox team did not want to do that, because it sort of worked OK as long as the canvas was only a small fraction of the screen. They also didn't want to expose any way of specifically detecting or disallowing readbacks. But no fullscreen WebGL application could be launched to a wide audience when a significant fraction of users would have their entire computer bog down and mouse pointer start skipping just loading the page.
So why couldn't you "just" detect that the framerate was low and turn off WebGL yourself? The problem is that the first few frames after page load are often slow for various reasons, some GPU-related and some not. You have to render frames for a couple of seconds before you can reliably measure steady-state performance, and during that time the user's mouse pointer is skipping and they're already having a really bad time. If you then decide to disable WebGL, all that loading time and bandwidth was wasted and you have to start over loading your fallback content. So yes, you can measure performance, but it doesn't solve the problem: users still have a bad time.
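For the record, the "measure it yourself" approach being discussed might look roughly like this - a minimal sketch, not from any real API. The idea is to collect requestAnimationFrame timestamps for a few seconds, discard the warm-up frames (which are slow for the unrelated reasons mentioned above), and compute steady-state FPS from the rest. The function names and thresholds are illustrative.

```javascript
// Sketch of client-side performance probing; names and thresholds
// are illustrative. In a browser, `timestamps` would be the values
// passed to successive requestAnimationFrame callbacks.
function steadyStateFps(timestamps, warmupMs = 1000) {
  // Discard frames rendered during the warm-up window, which are
  // often slow for reasons unrelated to steady-state GPU performance.
  const start = timestamps[0] + warmupMs;
  const steady = timestamps.filter((t) => t >= start);
  if (steady.length < 2) return 0; // not enough data to judge
  const elapsedMs = steady[steady.length - 1] - steady[0];
  return ((steady.length - 1) * 1000) / elapsedMs;
}

// Illustrative browser usage (switchToFallbackRenderer is hypothetical):
// let stamps = [];
// function probe(t) {
//   stamps.push(t);
//   if (t - stamps[0] < 3000) requestAnimationFrame(probe);
//   else if (steadyStateFps(stamps) < 30) switchToFallbackRenderer();
// }
// requestAnimationFrame(probe);
```

Note that this sketch illustrates exactly the drawback described above: the probe itself runs for seconds before a verdict is possible, and by then the damage is done.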
As for the name, it was suggested (perhaps jokingly) by a Mozilla engineer and I decided to go with it for the proposal because they were the ones that needed convincing, and I figured they were more likely to accept a name that they suggested themselves. Bikeshedding the name could easily have derailed the whole thing.
I still wish that Firefox had just disabled WebGL on those machines, or at least agreed to provide an explicit way to detect and/or disallow readbacks. Then we wouldn't have needed the vaguely specified compromise solution.
As a Firefox user I appreciate that we were considered. Although maybe I have a new bone to pick with Mozilla now: if their refusal was on privacy grounds, not exposing the hardware/software rendering mode or whether readback is happening is a weird hill to die on, when things like canvas font-rendering fingerprinting are far more reliable and precise.
With that history in mind, an ideal flag to me would have been more like an enum, say "minimumRequiredGLExperience", valued like ["any", <benchmark value>, "parity"], where apps have a middle ground to say "even with SW rendering, we may still want GL if the browser ranks itself above the threshold value" (which would be computed ahead of time and have a standard-ish definition, a bit like the scores on UserBenchmark). While still somewhat nebulous, it would allow fine-tuning the experience for known target platforms.
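Purely as a sketch of that idea - to be clear, this option exists nowhere and the semantics are entirely hypothetical - usage might look like:

```js
// Hypothetical API, not implemented in any browser:
const context = canvas.getContext("webgl", {
  // "any": take whatever the browser can give;
  // "parity": require near-native GL performance;
  // a number: require the browser's self-reported, pre-computed
  //           benchmark score to be at least this value.
  minimumRequiredGLExperience: 5000,
});
```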
I definitely understand the limited window of opportunity, though, and rolling out an entire feature like the one above would be a big project, so I can't fault anyone for going with the compromise. I just wish the name had been different or the documentation clearer - even the Khronos docs are wishy-washy on whether SW rendering is guaranteed to cause a failure, or whether it boils down to some implementation-defined benchmark. Which, I realize, would also require work from the Mozilla side that they may not be willing to do, for the same reason they didn't want to expose whether readback is happening in the first place!
Anyway, thanks again for responding, and I apologise for being a little abrasive in my first post.