I mean… a 3-layer network is a universal approximator, and you can very much do network distillation; it’s just that making the layers wide enough to learn whatever we want them to isn’t computationally efficient. You end up with much larger matmuls, which, let’s say for simplicity, scale cubically in the dim. In contrast, you can stack layers instead, which is much more computationally friendly because the individual matmuls stay small.
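Rough back-of-the-envelope sketch of that trade-off (all widths/depths are made-up illustrative numbers, not anything from a real model): per-layer matmul cost grows with the square of the width (roughly cubic once the batch scales with it too), but only linearly with depth.

```python
def layer_flops(batch: int, width: int) -> int:
    """Multiply-adds for one dense width x width layer applied to a batch."""
    return 2 * batch * width * width

batch = 64
base_width, base_depth = 1024, 4

base = base_depth * layer_flops(batch, base_width)

# Doubling depth doubles the cost...
deeper = (2 * base_depth) * layer_flops(batch, base_width)

# ...while doubling width quadruples it, because the weight matrix is width^2.
wider = base_depth * layer_flops(batch, 2 * base_width)

print(f"baseline: {base:,} FLOPs")
print(f"2x depth: {deeper:,} FLOPs ({deeper / base:.0f}x)")
print(f"2x width: {wider:,} FLOPs ({wider / base:.0f}x)")
```

So if you need a lot more capacity, adding it as depth is the cheaper direction.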
Of course you then need to compensate with residuals, initialisation, normalisation, and all that, but it’s a small price to pay for scaling much, much better with compute.