I agree with you that this network probably has not found the source code or something like a minimal description in its weights.
Honestly, I'm writing a paper on model compression/complexity right now, so I may have co-opted the discussion to practice talking about these things...! Just a bit over-eager (,,>﹏<,,)
Have you given much thought to how we can encourage models to be more compressible? I'd love to be able to explicitly penalize the file size during training, but in some usefully learnable way. Proxies like weight norm penalties have problems in the limit (a small norm isn't the same thing as a short description).
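For concreteness, here's a toy sketch of one thing "penalize the file size" could mean: quantize the weights and measure their empirical entropy, which approximates bits-per-weight under an entropy coder. (This is just my own illustration, not anything from your setup — and as written it's not differentiable, so you'd still need a soft relaxation to actually train against it.)

```python
import math
from collections import Counter

def entropy_bits_per_weight(weights, step=0.05):
    """Estimate bits per weight if we quantized to `step` and entropy-coded.
    A crude proxy for on-disk model size; not differentiable as written."""
    codes = [round(w / step) for w in weights]
    counts = Counter(codes)
    n = len(codes)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Weights clustered around a few values compress well...
clustered = [0.0] * 80 + [0.5] * 15 + [-0.5] * 5
# ...while spread-out weights cost more bits each.
spread = [i * 0.01 for i in range(100)]

assert entropy_bits_per_weight(clustered) < entropy_bits_per_weight(spread)
```

This is also roughly where weight-norm proxies diverge from what we want: the `spread` weights above have a small norm but a long description, and vice versa for some clustered-but-large weights.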