Yeah, I somehow glossed over the whole container thing.
The container thing is probably about horizontal scaling: one container runs on one instance with one vCPU. Running multiple processes per instance means you need beefier slices of compute to take advantage of the parallelism, and you can't cleanly scale up and back down using only the resources you need.
If you have a queue distributing work, that model makes sense with single-threaded interpreters: consumer instances get spun up and down as needed. The alternative is pushing work to a thread pool, or to multiple instances each with their own thread pool, that aren't inhibited by the GIL. The latter could be more efficient depending on the work.
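To make the GIL point concrete, here's a minimal sketch (names and workload are illustrative, not from this thread) comparing a thread pool to a process pool on CPU-bound work in CPython. With the GIL, the thread pool usually shows little speedup on pure-Python compute, while processes can scale with cores:

```python
# Illustrative sketch: thread pool vs process pool for CPU-bound work.
# On a standard CPython build, the GIL serializes the thread-pool version.
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def cpu_bound(n: int) -> int:
    # Pure-Python busy loop, so the GIL is the bottleneck for threads.
    total = 0
    for i in range(n):
        total += i * i
    return total

def run(pool_cls, jobs: int, n: int) -> float:
    start = time.perf_counter()
    with pool_cls(max_workers=4) as pool:
        list(pool.map(cpu_bound, [n] * jobs))
    return time.perf_counter() - start

if __name__ == "__main__":
    threads_t = run(ThreadPoolExecutor, jobs=4, n=2_000_000)
    procs_t = run(ProcessPoolExecutor, jobs=4, n=2_000_000)
    print(f"threads: {threads_t:.2f}s  processes: {procs_t:.2f}s")
```

For I/O-bound consumers the thread pool is often fine, which is why "which model is more efficient" really does depend on the work.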