I am not highly clued in to XLA; it's new, quite experimental, and more honestly I just haven't looked at it in detail. Given that XLA provides compilation, JIT or ahead-of-time, it doesn't really (yet) factor into the dynamic graph discussion.
What would theoretically be interesting is a JIT for dynamic computation graphs. Frequently occurring subgraphs could be compiled once, cached, and re-used whenever a structurally identical subgraph shows up again, similar to how a JavaScript JIT caches compiled hot code paths. No doubt they're already pondering such things.
https://www.tensorflow.org/versions/master/experimental/xla/
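To make the subgraph-caching idea concrete, here is a minimal Python sketch. Everything in it (the `SubgraphJIT` class, the toy "ops", the compilation step) is invented for illustration and is not a real TensorFlow/XLA API; it just shows the cache-keyed-on-structure trick: compile on first sight of a subgraph shape, reuse on every later sight.

```python
# Hypothetical sketch of subgraph caching for a dynamic-graph JIT.
# All names are invented for illustration; this is not a real XLA API.

class SubgraphJIT:
    def __init__(self):
        self.cache = {}   # structure signature -> "compiled" function
        self.hits = 0
        self.misses = 0

    def signature(self, ops):
        # Key on the op structure, not on concrete input values, so
        # structurally identical subgraphs share one compilation.
        return tuple(ops)

    def compile(self, ops):
        # Stand-in for real compilation: fold the op sequence into
        # a single callable.
        def compiled(x):
            for op in ops:
                if op == "double":
                    x = x * 2
                elif op == "inc":
                    x = x + 1
            return x
        return compiled

    def run(self, ops, x):
        key = self.signature(ops)
        fn = self.cache.get(key)
        if fn is None:
            self.misses += 1          # first time: compile and cache
            fn = self.compile(ops)
            self.cache[key] = fn
        else:
            self.hits += 1            # seen before: reuse compiled code
        return fn(x)

jit = SubgraphJIT()
print(jit.run(["double", "inc"], 3))  # compiles the subgraph -> 7
print(jit.run(["double", "inc"], 5))  # cache hit, no recompile -> 11
```

A real system would also need heuristics for when compilation pays for itself, and invalidation when shapes change, which is exactly where the JavaScript-JIT analogy (tiered compilation, deoptimization) gets interesting.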