That depends on the implementation. The cache could be maintained in shared memory, in an external process, or in disk files; in all cases it can provide a significant speedup over reading and compiling hundreds of files, without burdening every worker with its own copy of all the compiled opcodes.
The current implementation uses shared memory.
In addition, this does not mean retaining all the variables and data; only the compiled opcodes are cached, and compiling them is the bulk of the startup time.
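As a rough illustration of the idea (not the actual implementation), here is a minimal Python sketch of an opcode cache. It uses an ordinary in-process dict as a stand-in for the shared-memory segment, keys entries by file path and modification time so edits invalidate stale entries, and reuses the compiled code object on subsequent loads instead of re-reading and recompiling the source. The `load` helper is hypothetical.

```python
import os

# Stand-in for the shared-memory cache: maps (path, mtime) to a
# compiled code object. In a real opcode cache this store would be
# shared across worker processes.
_cache = {}

def load(path):
    """Return the compiled code object for a source file, compiling
    it only on a cache miss or after the file has been modified."""
    mtime = os.path.getmtime(path)
    key = (path, mtime)
    code = _cache.get(key)
    if code is None:  # miss: read and compile once, then cache
        with open(path) as f:
            code = compile(f.read(), path, "exec")
        _cache[key] = code
    return code
```

Note that the cache holds only the compiled code, not any execution state: each worker still runs the code object in its own namespace, so variables and data are not shared, only the result of compilation.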