That's roughly what we do, though we run a hosted version of an open-source webapp, not a CDN. It's more expensive resource-wise (particularly RAM), but it has meant we were immune to 90%+ of the security bugs discovered in the platform.
Why would the costs be outlandish? We offer that and we're fairly cheap. Since the cost is mostly fixed per customer, it should scale linearly.
As for scaling, they already have to do that, by pointing different requests at different servers depending on their load, etc.
If there are 1M customers, that's a minimum of 1M servers. Some customers are obviously larger and would need more, and there's also HA. Let's conservatively call it 2.5M servers.
At an absolute bare minimum we'd need to allocate 2.5M GB of RAM and 2.5M vCPUs. That's a huge amount of resources.
If you could reliably fit 10,000 small customers on a single server with 32 GB of RAM and 8 CPUs, you can already start to see how many resources could be saved.
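The arithmetic above is easy to sketch. This is just a back-of-the-envelope comparison using the figures from the thread (1 GB / 1 vCPU minimum per isolated customer, 10,000 small customers per shared 32 GB / 8 vCPU box); all the numbers are assumptions from the discussion, not measurements.

```python
customers = 1_000_000

# Isolated: roughly one server per customer, with a 2.5x multiplier
# for larger customers and HA, each allocated >= 1 GB RAM / 1 vCPU.
isolated_servers = int(customers * 2.5)   # 2,500,000
isolated_ram_gb = isolated_servers * 1
isolated_vcpus = isolated_servers * 1

# Shared: assume 10,000 small customers fit on one 32 GB / 8 vCPU box.
customers_per_box = 10_000
shared_servers = customers // customers_per_box  # 100
shared_ram_gb = shared_servers * 32              # 3,200 GB
shared_vcpus = shared_servers * 8                # 800

# Over-provisioning factor for RAM under full isolation.
print(isolated_ram_gb / shared_ram_gb)  # ~781x
```

Even if you triple the shared fleet for HA and headroom, the gap stays in the hundreds-of-times range.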
Without customer isolation you've got the entire cluster to handle load spikes and HA. With isolation you need a scheduler monitoring each of the 1M clusters and scaling appropriately by anticipating demand.
Scaling a service is way easier than scaling customers within a service or many services.
Each _process_ has its own memory page table. Containers are built out of processes, so they inherit this attribute.
Namespaces have nothing to do with it.
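The per-process page table point is easy to demonstrate: after a fork, a write in the child lands in the child's own copy of the page (copy-on-write), and the parent never sees it. A minimal sketch on a Unix-like system:

```python
import os

x = 42
pid = os.fork()
if pid == 0:
    # Child: this write goes to the child's own copy of the page;
    # the parent's address space is untouched.
    x = 99
    os._exit(0)

os.waitpid(pid, 0)
print(x)  # parent still sees 42
```

No namespaces are involved here; the isolation comes purely from each process having its own virtual-to-physical mapping.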
That just punts the vulnerable code elsewhere. A kernel bug could leak memory across processes. And the kernel is also written in C, so you aren't getting protection from a "better" language either.
I disagree. I think it's much less likely, because the kernel doesn't usually get involved in a process's memory once it is allocated.
Sure, it maps virtual memory to physical memory as needed, but a bug there is likely to cause severe corruption and an immediate crash. The kernel doesn't operate at a low enough level for a bug there to plausibly leave a process running normally while leaking data across a process-internal customer boundary that the kernel can't even see. That would require a level of surgical precision I don't think is likely in a bug.
To be clear, I'm not saying you shouldn't use per-customer processes. The kernel has more eyes on it and is less likely to be vulnerable in this way. Just that from an analytical perspective, you're really moving the problem elsewhere rather than solving it.