One TPU (not even a pod, just a regular old TPUv2) has 96 CPU cores with 1.4TB of RAM, and that’s not even counting their hardware acceleration. I’d love to buy one.
A single TPUv2 core has 8GB of memory (each chip has two cores). A single device comes in the v2-8 configuration: 8 cores with 64GB of memory total.
Pod slices range from v2-32 up to v2-512.
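The slice-name arithmetic above can be sketched as a tiny helper, assuming the suffix of a name like "v2-8" is the core count and 8GB of HBM per core (the figure quoted above; the function name is illustrative, not a real API):

```python
def tpu_v2_slice_memory(slice_name: str, gb_per_core: int = 8) -> tuple[int, int]:
    """Return (cores, total HBM in GB) for a slice name like 'v2-8'.

    Assumes the numeric suffix is the core count, which holds for the
    v2-8 through v2-512 configurations mentioned here.
    """
    cores = int(slice_name.split("-")[1])  # e.g. "v2-512" -> 512 cores
    return cores, cores * gb_per_core

print(tpu_v2_slice_memory("v2-8"))    # (8, 64)
print(tpu_v2_slice_memory("v2-512"))  # (512, 4096)
```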
The TPUs you rent that are being discussed here are capable of training: they consume hundreds of watts, have heatsinks bigger than your fist, and come with really spectacular network links. From a "what can you do with them" perspective, they're analogous to Nvidia's highest-end GPUs.
Both are custom chips for deep learning but they're completely different beasts.
As for the rest of them, list them on Amazon and let Amazon handle the fulfillment. That $10k of hardware isn't going to sell itself from your closet. (Yet. LLMs are making great strides.)