No, it's not. Only Google spends significant effort on automatic architecture search, and many people suspect this is really an attempt to sell cloud capacity.
> Once the architecture is found, training and net finetuning/transfer learning is comparatively cheap
Training isn't cheap for significant problems: getting the data is very expensive, and compute is a significant cost for large datasets.
> This implies we could see 10-100x gains in AI algorithms using today's hardware
Actually, most of the time we see 10-100% (percent, not times!) gains from architecture improvements, whether manual or automatic.
But that is very significant, because a 10% improvement can suddenly make something useful that wasn't before.
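To make the percent-vs-times distinction concrete, here's a minimal sketch with a purely hypothetical baseline metric (say, throughput in samples/sec); the point is just how different the scales are:

```python
baseline = 100.0  # hypothetical baseline metric, e.g. samples/sec

ten_percent_gain = baseline * 1.10       # 10% better
hundred_percent_gain = baseline * 2.0    # 100% better, i.e. 2x
ten_times_gain = baseline * 10.0         # 10x better
hundred_times_gain = baseline * 100.0    # 100x better

print(ten_percent_gain, hundred_percent_gain, ten_times_gain, hundred_times_gain)
```

A 100x gain is fifty times larger than even the best-case 100% (2x) gain, which is why conflating the two wildly overstates what architecture improvements typically deliver.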