That is a staggering number of possible Go games; no wonder tree search failed to improve without the convnet pruning.
Makes me wonder whether DeepMind could have learned Go without first training the convnet on the big dataset of expert games to prune the tree.
Which would imply that DeepMind couldn't learn to play Go without first being taught by us (via the expert games).
So AlphaGo learned Go from us. It took human brains to crack the problem of Go, and the AI learned from our solutions rather than discovering them itself. Still a very great breakthrough.
Would Lee Sedol have won if he could use MCTS to assist his evaluation?
(Arguably the MCTS is a non-AI component of DeepMind's system and does not learn.)
MCTS = Monte Carlo Tree Search, where repeated random playouts evaluate candidate moves by randomly sampling the (googolplex-sized, in Go's case) tree of possible continuations.
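To make the random-playout idea concrete, here's a minimal sketch of pure Monte Carlo move evaluation, on tic-tac-toe rather than Go since a full Go engine wouldn't fit in a comment. The principle is the same: score each legal move by the win rate of many random continuations. All function names here are made up for the sketch, and this is flat playout sampling without the tree-growing part of full MCTS.

```python
import random

# Winning lines on a 3x3 board indexed 0..8.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def random_playout(board, player):
    """Play uniformly random moves to the end; return the winner (or None for a draw)."""
    board = board[:]
    while True:
        w = winner(board)
        if w or all(board):
            return w
        move = random.choice([i for i, v in enumerate(board) if not v])
        board[move] = player
        player = 'O' if player == 'X' else 'X'

def evaluate_moves(board, player, n_playouts=200):
    """Estimate each legal move's value as the fraction of random playouts won."""
    scores = {}
    for move in [i for i, v in enumerate(board) if not v]:
        b = board[:]
        b[move] = player
        opponent = 'O' if player == 'X' else 'X'
        wins = sum(random_playout(b, opponent) == player for _ in range(n_playouts))
        scores[move] = wins / n_playouts
    return scores

board = [''] * 9
board[4] = 'X'  # X took the centre; which reply looks best for O?
print(max(evaluate_moves(board, 'O').items(), key=lambda kv: kv[1]))
```

On Go's board the branching factor makes uniform random playouts far too noisy on their own, which is exactly where AlphaGo's convnet comes in: it narrows the candidate moves before the playouts sample them.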