You are confusing the two phases. MuZero's training does not use MCTS; it merely observes sequences of moves/states/rewards. These observations can come from anywhere: human games, AG games, A0 games, random games. This is where it does the actual learning of which moves are valid and what makes a move good (because invalid moves are never represented in a dataset of valid games). It does not need MCTS or any access to an oracle about move validity, which is Marcus's complaint. This is no more cheating than observing the real world to infer its physics.
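To make that first phase concrete, here is a minimal sketch (PyTorch-style, not DeepMind's code) of the kind of training step that suffices: representation/dynamics/policy/value networks fit to observed (state, action, reward) sequences from any corpus, with no MCTS and no legality oracle anywhere in the loop. All names, shapes, and the linear heads below are illustrative assumptions.

```python
# Minimal sketch under toy assumptions; the real networks are deep, but the
# training signal is the same: observed trajectories, nothing else.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, HIDDEN = 64, 8, 128

class MuZeroNets(nn.Module):
    def __init__(self):
        super().__init__()
        self.represent = nn.Linear(STATE_DIM, HIDDEN)             # h: observation -> latent state
        self.dynamics  = nn.Linear(HIDDEN + ACTION_DIM, HIDDEN)   # g: (latent, action) -> next latent
        self.reward    = nn.Linear(HIDDEN + ACTION_DIM, 1)        # g: predicted immediate reward
        self.policy    = nn.Linear(HIDDEN, ACTION_DIM)            # f: latent -> move logits
        self.value     = nn.Linear(HIDDEN, 1)                     # f: latent -> value estimate

def training_step(nets, opt, obs, actions, rewards, returns):
    """One gradient step on a batch of observed trajectories (human, AG, A0, random...).

    obs: [B, STATE_DIM] first observations; actions/rewards/returns: [B, K] unroll targets.
    """
    latent = torch.relu(nets.represent(obs))
    loss = 0.0
    for k in range(actions.shape[1]):                              # unroll K observed steps
        onehot = nn.functional.one_hot(actions[:, k], ACTION_DIM).float()
        # The policy head is fit to moves that were actually played, so it never
        # sees an illegal move: validity is learned implicitly from the corpus.
        loss = loss + nn.functional.cross_entropy(nets.policy(latent), actions[:, k])
        loss = loss + (nets.value(latent).squeeze(-1) - returns[:, k]).pow(2).mean()
        ga = torch.cat([latent, onehot], dim=-1)
        loss = loss + (nets.reward(ga).squeeze(-1) - rewards[:, k]).pow(2).mean()
        latent = torch.relu(nets.dynamics(ga))                     # step the learned model forward
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```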
The second phase, where new games are generated, may use MCTS. But it doesn't have to. So it can learn by simply training on a game corpus, and then generating a new game corpus by self-play, using only its internal implicit tree search plus something like 'illegal move = instant loss'. It will rapidly learn not to make illegal moves and will play just as validly as an MCTS-structured tree search, and then its implicit learned tree search achieves the same or greater playing strength.
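A corresponding sketch of that second phase, under the same assumptions: new games are generated from the learned policy alone, with no explicit MCTS. The `env` object here is hypothetical shorthand for the game itself (which rejects illegal moves and ends the episode with a loss), not an oracle handed to the learner.

```python
import torch

def self_play_game(nets, env, max_moves=200):
    """Generate one game by sampling the learned policy directly (no explicit MCTS).

    `env` is a hypothetical game interface with reset()/is_legal()/step();
    it is just the game rules themselves enforcing 'illegal move = instant loss'.
    Returns (obs, move, reward) triples usable as fresh training data.
    """
    trajectory = []
    obs = env.reset()
    for _ in range(max_moves):
        latent = torch.relu(nets.represent(torch.as_tensor(obs, dtype=torch.float32)))
        move = torch.distributions.Categorical(logits=nets.policy(latent)).sample().item()
        if not env.is_legal(move):
            trajectory.append((obs, move, -1.0))   # illegal move = instant loss, game over
            break
        next_obs, reward, done = env.step(move)
        trajectory.append((obs, move, reward))
        obs = next_obs
        if done:
            break
    return trajectory
```

Games generated this way feed straight back into the training step above, which is all the self-improvement loop needs once an initial corpus exists.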