For what it's worth, the idea of a "tensor" in ML is pretty far removed from any physical concept. I don't know its mathematical origins (would be interesting, I'm sure), but in ML tensors are only involved because they're our framework for dealing with multi-linear transformations.
Most NNs work by something akin to "(multi-)linear vector transformation, followed by elementwise nonlinear transformation", stacked over and over so that the output of one layer becomes the input of the next. This applies equally well to simple models like "fully-connected" / "feed-forward" networks (aka "multi-layer perceptrons") and to more-sophisticated models like transformers (e.g. https://github.com/karpathy/nanoGPT/blob/325be85d9be8c81b436...).
It's less about combining lots of tiny local linear transformations piecewise, and more about layering linear and non-linear transformations on top of each other.
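As a rough sketch (not taken from the linked repo; dimensions are arbitrary and just for illustration), this "linear map, then elementwise nonlinearity, stacked" structure is what a plain MLP looks like in PyTorch:

```python
import torch
import torch.nn as nn

# Each "layer" is an affine map y = Wx + b followed by an elementwise
# nonlinearity; the layers are stacked so one layer's output feeds the next.
mlp = nn.Sequential(
    nn.Linear(784, 256),  # linear (affine) transformation
    nn.ReLU(),            # elementwise nonlinearity
    nn.Linear(256, 64),
    nn.ReLU(),
    nn.Linear(64, 10),    # final linear map to the output
)

x = torch.randn(32, 784)  # a batch of 32 input vectors
logits = mlp(x)           # shape: (32, 10)
```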
I don't really know how physics works beyond whatever Newtonian mechanics I learned in high school. But unless the underlying math is similar, I'm hesitant to run too far with the analogy.