You can use OpenXLA with PyTorch, but it's not the default. The main use case is running PyTorch on Google TPUs (via the torch_xla package). OpenXLA also supports GPUs, though adoption there seems smaller. JAX, by contrast, uses OpenXLA as its compiler backend across accelerators, afaik.
If you use torch.compile() in PyTorch, you get TorchInductor by default, which generates OpenAI's Triton kernels on GPU (and C++/OpenMP code on CPU).
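A quick way to see this for yourself: TorchDynamo keeps a registry of compile backends, and "inductor" is the registered default. A minimal sketch (assumes PyTorch 2.x; the "openxla" backend only shows up if torch_xla is also installed):

```python
import torch
import torch._dynamo as dynamo

def f(x):
    # Trivial function just to have something to compile.
    return torch.sin(x) ** 2 + torch.cos(x) ** 2

# backend="inductor" is the default, spelled out here for clarity.
# Wrapping is lazy: no kernels are generated until the first call.
compiled_f = torch.compile(f, backend="inductor")

# List the backends Dynamo knows about; "inductor" is always among them.
print(sorted(dynamo.list_backends()))
```

On the first call to `compiled_f`, Inductor lowers the graph and emits Triton on CUDA devices, which is why Triton shows up as a dependency of a stock PyTorch GPU install.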