> The CGI model may still work fine, but it is an outdated execution model
The CGI model of one process per request is excellent for modern hardware and really should not be scoffed at anymore IMO.
It can both utilize big machines and scale to zero; it is almost leak-proof, since the OS reclaims all memory and file descriptors when the process exits; it is language-independent and dead simple to understand; and it allows finer-grained per-request resource control (max memory, file descriptor count, chroot) than threads do, among other things.
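A minimal sketch of what this looks like in practice, assuming a Unix host (the limit values and helper names are illustrative, not a real deployment): the server forks one process per request, the script reads the request from environment variables per the CGI convention, writes the response to stdout, and exits, after which the OS reclaims everything it used.

```python
#!/usr/bin/env python3
# One process per request: this program handles exactly one request
# and exits. The OS then frees all of its memory and file descriptors,
# which is what makes the model nearly leak-proof.
import io
import os
import resource
import sys

def apply_limits() -> None:
    # Per-process resource control, applied before handling the request.
    # Threads sharing one address space cannot be boxed in like this.
    # (64 is an illustrative cap on open file descriptors.)
    resource.setrlimit(resource.RLIMIT_NOFILE, (64, 64))

def handle(environ: dict, out) -> None:
    # Request metadata arrives via environment variables (RFC 3875).
    method = environ.get("REQUEST_METHOD", "GET")
    path = environ.get("PATH_INFO", "/")
    # CGI response: headers, a blank line, then the body, on stdout.
    out.write("Content-Type: text/plain\r\n\r\n")
    out.write(f"{method} {path}\n")

if __name__ == "__main__":
    apply_limits()
    handle(os.environ, sys.stdout)
```

Because the handler is just "read environment, write stdout", it works the same in any language that can do those two things, which is where the language independence comes from.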
How is this execution model "outdated"?