So this is aimed at inference rather than training. Does Intel have any plans to produce chips that can scale training as well, or will training large models largely be left to GPUs for the time being?
I imagine models need to be deployed more often than trained, but I thought the pain point was usually the latter.