I'm not sure why you think I must be conspiratorial. I'll admit the thesis that 'Nvidia is an AI leader in software' is unusual, but I think it's ultimately well supported by the public record and some diligent research.
I've been watching Nvidia for a while, and one thing you notice quickly is that, much like Apple, they don't pre-announce or oversell vaporware; they tend to only announce things they have already worked on for years and that are imminently available.
> As far as I am aware Nvidia does not even run a cloud do they?
They don't run a public cloud yet, although they are making noises in that direction [1]. GPU Cloud right now is just a place where you get packaged Docker images (and then run them on AWS, GCE, what have you), but I don't think the branding is accidental—they are setting it up so if they decide to build a public cloud, ML researchers will already be familiar with the term.
They are also doing distributed cloud GPUs direct to consumer via Cloud Gaming [2].
Internally, they have gone the HPC/supercomputing route to develop their own ML stack, rather than the Google/MS/AWS hyperscaler route [3]. They basically built their own supercomputer out of Voltas, and they use it internally for, among other things, developing their self-driving car software [4], including the simulation platform.
Note that AFAIK, the simulation platform is far ahead of the other players in the field. We have heard time and again that 'data' will be the competitive advantage for Tesla (miles driven) and Waymo (mapping data). What if you can partially sidestep that by leveraging the ability of humans to define dangerous scenarios and rigorously test them, outside the constraints of real road driving?
The platform has essentially taken the idea of 'regression testing' and translated it into the ML space, and they are planning to deploy this into production systems in the next 1-2 years. From what I've heard from ML researchers, end-to-end testing and deployment of NNs is still in its infancy, in terms of being able to change your network and then do mass inference on the prior 'test cases' that you think are important.
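To make that concrete, here's a minimal, purely illustrative sketch of the idea (the function names and model interface are mine, not Nvidia's actual platform): keep a curated library of scenarios you care about, and every time the network changes, re-run inference over all of them and flag anything that got worse.

    # Hypothetical regression-test harness for a neural network.
    # The model is any callable mapping an input array to a scalar prediction.
    import numpy as np

    def run_regression_suite(model, test_cases, tolerance=1e-3):
        """Re-run inference on a curated set of important scenarios.

        test_cases: list of (case_id, input_array, expected_output),
                    e.g. hand-defined dangerous driving situations.
        Returns the cases whose predictions drifted beyond `tolerance`.
        """
        regressions = []
        for case_id, x, expected in test_cases:
            prediction = model(np.asarray(x))
            if abs(float(prediction) - float(expected)) > tolerance:
                regressions.append((case_id, expected, prediction))
        return regressions

    # After retraining or otherwise changing the network, re-run the same
    # suite and refuse to promote the model if previously-passing cases fail:
    #   failures = run_regression_suite(new_model, curated_cases)
    #   assert not failures, "model regressed on %d scenarios" % len(failures)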
> Google Fiber was NOT about cost. It was about AT&T and other established players with some local governments making it difficult for Google to access what they needed to be able to compete.
You are defining 'cost' far too narrowly, or rather not seeing how non-economic costs eventually translate into economic ones. The established players made it difficult for Google. That eventually translated into 1) higher legal fees to fight them, 2) slower deployment rates, and 3) higher operational costs for expansion. All of these obviously cost a lot of time and money and sharply lower the overall ROI of the project, which is why Google has essentially given up. There's only risk, no reward.
The point is not to compare the TPU project directly to Fiber (the two projects are very different), but to address your claim that 'cost doesn't matter to Google because they have a lot of money'. Companies that truly don't care about cost very soon end up with very little money. Put another way, I don't think continuing TPU development will ultimately be more profitable for Google than simply buying GPUs from Nvidia down the line.
> Now you have to go after memory access.
Nvidia might be better positioned to optimize memory access than Google is: they have their own fabric, and they work with a large variety of partners to optimize their ML/DL workloads.
> Your post is all over the place so a bit hard to respond.
Well, the crux of my argument is that:
1. Chip development is an expensive business
2. Nvidia is good at building chips; the Volta is already within striking distance of the TPU while using only ~25% of its die area for tensor units. As NNs grow, inter-node scalability will become more important, and Nvidia has large advantages in interconnect that will show up in large-scale deployments (like supercomputers, where I expect a lot of DL to happen).
3. Google's business strategy only lets it spread development costs over its own deployment, while Nvidia can spread its dev costs across many other players, including competing hyperscalers, HPC, gamers, and carmakers. Nvidia's potential 'ecosystem' is much larger than Google's, and historically that kind of structural advantage has been very hard to surmount.
Taken together, 1-3 mean that in the long run, a 'go-it-alone' strategy like Google's is unlikely to win a protracted R&D fight.
> Google solved Go a decade early. Hinton did the Capsule networks and basically the farther of DL. Well made it actually work. What breakthrough came from Nvidia?
Yes, DeepMind has made some great strides, but how does that directly fund TPU development and give it a competitive advantage? The fact that those papers are published means any talented researcher at Nvidia can replicate the work, then run and optimize it on their GPU architecture.
> There is so much crazy stuff in your posts this must be driven by something else and something emotional? Your points are just not based on reality. Is this really about Google firing Damore?
I'm not sure why you are so convinced that only a crazy person with a beef against Google could hold a differing opinion from yours. Do you work on the TPU team or something?
> Google will release the gen 3 and then share a paper on the gen 2 and we will see Nvivida then try to copy that one. Nvidia always a couple of steps behind.
Where's your evidence that Nvidia is simply copying Google, rather than both engineering teams viewing the same problems and converging to similar solutions?
Note that even if it is true that Nvidia is simply 'copying Google', they have the resources to beat Google at its own game by leveraging process, memory, CUDA, etc. You've studiously avoided addressing this point.
[1] https://www.nvidia.com/en-us/gpu-cloud/deep-learning-contain...
[2] http://www.nvidia.com/object/cloud-gaming.html
[3] https://www.nextplatform.com/2017/11/30/inside-nvidias-next-...