This is from Dan Walsh who is quite famous in the SELinux community [1]. See the part "Why didn't SELinux block it?"
Sure, you can configure SELinux with a stricter policy, but virtually nobody does that in practice; people run the defaults. Custom SELinux policy work is mostly done to fix whatever the default policies break, usually via audit2allow or a similar tool.
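For reference, the "make the denial go away" loop I mean looks roughly like this. A sketch only: it assumes a Fedora/RHEL-style system with the policycoreutils tools installed, and `mylocal` is a made-up module name.

```shell
# 1. Find recent AVC denials in the audit log
ausearch -m avc -ts recent

# 2. Generate a local policy module that allows exactly what was just denied
ausearch -m avc -ts recent | audit2allow -M mylocal

# 3. Load it, permanently widening the policy
semodule -i mylocal.pp
```

Note the direction of travel: each iteration makes the policy more permissive, not stricter, which is why the defaults are effectively the ceiling for most deployments.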
> I sense you might not be interested in this since you've said you just want benchmarks
If the EDR/antivirus industry can have various test suites and testing organisations, the same methodologies can apply to OS security subsystems. Let's see what happens with your default RH/SELinux, Ubuntu/Debian with AppArmor, and for shits and giggles OpenBSD; see how they fare against exploits and vulnerabilities harvested over the years, and make the labs reproducible. That's what I'm asking. I'm not going to scour security advisories to do tit-for-tat comparisons; I would rather see a well-thought-out approach to this.
We do this kind of comprehensive benchmarking with all sorts of software: compression, cryptographic libraries, compilers/languages, etc. It would've been harder back when virtualisation and utility computing were nascent, but the infrastructure part is pretty achievable these days. It just needs someone to expend the effort (and I'm not volunteering).