Oddly enough, that’s exactly what I’ve been benchmarking - different ways of linking Strix Halo machines - with respect to throughput & latency.
Posted a little bit re: the TB side of things on the Framework and Level1Techs forums, but haven’t pulled everything together yet because the higher-speed Ethernet and InfiniBand data is still being collected.
So far my observation re: TB is that, on Strix Halo specifically, while latency can be excellent there seem to be some limits on throughput. My tests cap out at ~11Gbps unidir (Tx|Rx) and ~22Gbps bidi (Tx+Rx), which is weird because the USB4 ports are advertised at 40Gbps bidi, the links report as 2x20Gbps, and they’re stable with no errors/flapping - so it’s not a cabling problem.
The issue seems rather specific to TB networking on Strix Halo using the USB4 links between machines.
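For anyone wanting to poke at the same thing, here’s a minimal sketch of the kind of test loop involved, assuming the usual iperf3-over-thunderbolt-net approach (the peer address, stream count, and duration below are placeholders, not my exact settings):

```python
#!/usr/bin/env python3
"""Rough throughput test sketch: assumes iperf3 is installed on both machines
and `iperf3 -s` is already running on the peer's thunderbolt-net address."""
import json
import subprocess

PEER = "10.0.0.2"   # example address on the peer's thunderbolt-net interface
DURATION = 30       # seconds per run
STREAMS = 4         # parallel TCP streams

def run_test(reverse: bool = False) -> float:
    """Run one iperf3 test and return receiver-side throughput in Gbps."""
    cmd = ["iperf3", "-c", PEER, "-t", str(DURATION), "-P", str(STREAMS), "-J"]
    if reverse:
        cmd.append("-R")  # peer sends, this machine receives
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    report = json.loads(result.stdout)
    return report["end"]["sum_received"]["bits_per_second"] / 1e9

if __name__ == "__main__":
    print(f"Tx: {run_test():.2f} Gbps")
    print(f"Rx: {run_test(reverse=True):.2f} Gbps")
    # bidi numbers: same command with --bidir added (needs iperf3 >= 3.7)
```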
Emphasis on “Strix Halo specifically” to exclude the common exceptions: other platforms (e.g. Intel users getting well over 20Gbps), other mini PCs (e.g. MS-1 Max USB4v2), the local network stack (I’ve measured loopback at >100Gbps), or external storage (where folks are seeing 18Gbps+, i.e. numbers that line up with their devices).
End goal is to get hard data on all reasonably achievable link types. Already have data on TB & lower-speed Ethernet (switched & P2P); currently doing setup & tuning on some Mellanox cards to collect data for higher-speed Ethernet and IB. P2P only for now - 100GbE switching is becoming mainstream, but IB switches are still rather nutty.
Happy to collaborate with any other folk interested in this topic. Reach out to (username at pm dot me).