Everything you say makes sense. Training is definitely more compute-intensive than inference.
Training is constrained by both memory throughput and compute. Much of the research into speeding up training goes into optimizing HBM-to-SRAM communication. The equivalent for your chips would be SRAM-to-SRAM communication between chips, where it sounds like your architecture has a major memory-throughput advantage over GPUs. So I assume you don't have a proportional compute advantage?
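For anyone following along: whether a given kernel is limited by memory throughput or by compute falls out of a simple roofline calculation. Here's a minimal sketch in Python; the A100 figures are published specs used purely as an illustration, and the per-kernel arithmetic intensities are rough guesses, not measurements:

```python
# Rough roofline check: a kernel is memory-bound when its arithmetic
# intensity (FLOPs per byte moved over HBM) is below the hardware's
# compute/bandwidth ratio. Swap in your own chip's numbers.

PEAK_FLOPS = 312e12      # A100 dense BF16, FLOP/s
HBM_BANDWIDTH = 2.0e12   # A100 80GB HBM2e, ~2 TB/s in bytes/s

ridge_point = PEAK_FLOPS / HBM_BANDWIDTH  # ~156 FLOPs/byte
print(f"compute-bound above ~{ridge_point:.0f} FLOPs/byte")

def attainable_flops(arithmetic_intensity: float) -> float:
    """Attainable FLOP/s for a kernel at the given FLOPs-per-byte."""
    return min(PEAK_FLOPS, HBM_BANDWIDTH * arithmetic_intensity)

# An elementwise op (~0.1 FLOPs/byte, rough) vs. a big matmul (~300).
for name, ai in [("elementwise op", 0.1), ("large matmul", 300.0)]:
    frac = attainable_flops(ai) / PEAK_FLOPS
    print(f"{name}: {frac:.1%} of peak compute attainable")
```

The same arithmetic applies to your architecture if you substitute the chip-to-chip SRAM bandwidth for HBM bandwidth, which is why the compute/bandwidth ratio is the interesting number here.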
By the way, it's great to see a non-von Neumann architecture showing a major performance advantage in a real-world application. And your chips are conceptually equivalent to chiplets; you should have a major cost advantage on bleeding-edge process nodes if you scale up manufacturing. Overall, very impressive!