In particular, it breaks software that does I/O scheduling in user space. That technique is idiomatic and explicitly supported by the Linux kernel, even inside virtual machines, but it conflicts with the container abstraction, so runtimes offer an ersatz version that lets the code run, albeit poorly.
What syscalls do you think are intercepted, and how? Speaking as someone who can write kernel code, I'm not aware of any such thing specific to containers. (As far as the Linux kernel is concerned, there's no such thing as a container.)
If you're talking about BPF, that can be used outside of containers too: systemd can restrict any unit with it, and using it is not part of any definition of what a container is.
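For example, systemd's `SystemCallFilter=` directive applies a seccomp BPF filter to an ordinary service with no container anywhere in sight. A hypothetical unit fragment (the service name and binary path are made up; `@system-service` is one of systemd's predefined syscall groups):

```ini
# /etc/systemd/system/example.service (hypothetical unit)
[Service]
ExecStart=/usr/local/bin/example-daemon
# Allow only the @system-service syscall set; anything else fails.
SystemCallFilter=@system-service
# The ~ prefix denies a call on top of the allow list.
SystemCallFilter=~ioperm
```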
You can find many examples in the wild of reputable software that loses significant performance once containerized, no matter how it's configured. No one has demonstrated state-of-the-art data infrastructure software that works around this phenomenon, and at this point you'd think someone would have if it were trivially possible. I test database kernels in a diverse set of environments, and the currently popular containers aren't remotely competitive with VMs, never mind bare metal. The reasons for the performance loss are actually pretty well understood at a technical level, albeit esoteric.
Every popular container system has runtimes that intercept syscalls, typically via seccomp BPF filters installed by default. Whether or not Linux requires containers to intercept syscalls is immaterial; in practice they all do, in a manner destructive to I/O performance.
There used to be a similar phenomenon with virtual machines for many years, such that no one deployed databases on them. Then clever people learned how to trick the VM into letting them punch a hole through the hypervisor, and we've been using that trick ever since. It isn't as fast as bare metal, but it is usually within 10%. No such trick exists for containers, and as a consequence their I/O performance is quite poor.