In the OP article, Norvig suggests about 10^172 possible games, with games averaging around 200 moves.
The Walraet paper gives 10^(10^103), which is a googolplex (10^(10^100)) raised to the 1000th power, not merely 1000 googolplexes - this is very large.
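The exponent arithmetic is easy to get wrong, so here is a quick sanity check (Python handles the exponents exactly as big integers; we only compare the exponents, since the full numbers are far too large to materialize):

```python
# A googolplex is 10**(10**100). The Walraet figure is 10**(10**103).
# Since 10**103 == 1000 * 10**100, we get
#   10**(10**103) == (10**(10**100)) ** 1000
# i.e. a googolplex *raised to the 1000th power*, not 1000 googolplexes.
googol = 10**100
walraet_exponent = 10**103

assert walraet_exponent == 1000 * googol
```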
Monte Carlo methods exist precisely for estimating search spaces this intractably large.
MCTS is close to the human heuristic for evaluating Go positions: counting who currently controls what, weighted by the strength of each position (in Go, two 'eyes' make a group impervious to capture). Early in the game this is very subtle to discern, which is what makes Go a subtle art.
The human heuristic is complex; the MCTS random playout is overly simple but throws gigaflops of compute at random sampling. MCTS was the basis of the best computer Go programs before AlphaGo (the new innovation is the convnets that prune the tree before MCTS).
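To make the random-playout idea concrete, here is a minimal sketch using the toy game of Nim (take 1-3 stones; whoever takes the last stone wins) rather than Go - Go is too big to sketch here, but the principle is identical: play many purely random games from a position and use the win rate as that position's value.

```python
import random

def random_playout(stones, to_move):
    """Play uniformly random moves to the end; return the winner (0 or 1)."""
    player = to_move
    while True:
        stones -= random.randint(1, min(3, stones))
        if stones == 0:
            return player          # this player took the last stone
        player = 1 - player

def estimate_value(stones, to_move, n_playouts=10_000):
    """Monte Carlo value estimate: fraction of random games won by `to_move`."""
    wins = sum(random_playout(stones, to_move) == to_move
               for _ in range(n_playouts))
    return wins / n_playouts
```

Full MCTS adds a tree on top of this (selecting which positions get playouts), but the evaluation at the leaves is exactly this kind of crude random sampling.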
The convnets are trained on a big database of expert games, which is why I wonder whether MCTS is the real differentiating factor - whether Lee Sedol armed with an MCTS would beat the convnet-plus-MCTS combination.
This is important because it raises the question: does AlphaGo plan?
If not, the implication is that planning is a poorer heuristic than whatever AlphaGo actually does.
The convnet provably doesn't plan: it estimates what a human expert would do, and it performs the same whether it is playing a game or just predicting the next move from random board configurations.
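The "no planning" point can be made concrete: a policy network is just a function from a board position to a probability distribution over moves, and playing with it greedily means one forward pass and a max - no lookahead at all. This is a hypothetical sketch; `policy_net` stands in for the trained convnet and is not AlphaGo's actual interface.

```python
def play_without_planning(board, policy_net, legal_moves):
    """Pick the move the (hypothetical) policy net rates most likely.

    One forward pass over the current board, no search, no lookahead -
    which is why the same call serves equally for playing a game or for
    predicting an expert's next move from an arbitrary position.
    """
    probs = policy_net(board)   # maps each move to a probability
    return max(legal_moves, key=lambda m: probs[m])
```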
AI research has always held planning in high regard.