https://www.semanticscholar.org/paper/Program-Synthesis-from...
AlphaCode is not particularly good at it, either. In the arXiv preprint, besides the subjective and pretty meaningless "evaluation" against human coders, it's also tested on a formal program synthesis benchmark, the APPS dataset. The best-performing AlphaCode variant reported in the preprint solves 25% of the "introductory" APPS tasks (the least challenging ones). All AlphaCode variants tested solve less than 10% of the "interview" and "competition" (intermediate and advanced) tasks. These more objective results are not reported in the article above, I think for obvious reasons (because they are extremely poor).
So it's not doing anything radically new, and it's not doing it particularly well either. Please be better informed before propagating hype.
Edit: really, from a technical point of view, AlphaCode is a brute-force, generate-and-test approach to program synthesis that was state-of-the-art 40 years ago. It's just a big generator that spams programs hoping it will hit a good one. I have no idea who came up with this. Oriol Vinyals is the last author, and I've seen enough of that guy's work to know he knows better than to bet on such a primitive, even backwards approach. I'm really shocked that this is DeepMind work.
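To make the "generate-and-test" point concrete, here's a toy sketch of the paradigm. This is my illustration, not AlphaCode's actual pipeline (their generator is a large transformer sampling millions of whole programs, and they cluster/filter candidates against the problem's example I/O); the candidate list and function names below are made up. But the control flow is structurally the same idea: generate candidates blindly, run them, keep whatever happens to pass the tests.

```python
# Toy generate-and-test program synthesis (illustrative only, not
# AlphaCode's real system). Enumerate candidate expressions from a
# tiny hypothetical grammar and keep the first one consistent with
# the example input/output pairs. Scale the generator up to a neural
# net spamming full programs and you have the same basic scheme.

# Hypothetical "grammar": a fixed pool of one-variable expressions.
CANDIDATES = ["x + 1", "x * 2", "x * x", "x - 1", "x // 2", "x * 3"]

def synthesize(examples, candidates=CANDIDATES):
    """Return the first candidate expression passing all I/O examples."""
    for expr in candidates:
        func = eval(f"lambda x: {expr}")  # "generate" a program
        if all(func(inp) == out for inp, out in examples):  # "test" it
            return expr
    return None  # no candidate survived the filter

# Example: recover the doubling function from two I/O pairs.
print(synthesize([(2, 4), (5, 10)]))  # → x * 2
```

The weakness is obvious from the sketch: there is no reasoning about the specification, just search plus a filter, so success depends entirely on the generator happening to emit a correct program among its samples.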