If your Spark jobs are mostly batch workloads that can tolerate moderately infrequent failures and restarts, try Google Dataproc with preemptible VMs or Amazon EMR with Spot Instances.
Depending on your use case, you might spend several times less than you would on regular VMs; instances that cost several dollars an hour on-demand on AWS can often be had for a fraction of the price.
It's also fairly easy to automate the region selection and bid (on AWS, at least; I'm not sure about GCP).
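Roughly, the automation amounts to: fetch current spot prices, pick the cheapest placement, and bid a bit above it. A minimal sketch of that picker, with made-up sample prices standing in for a live query (in practice you'd pull prices per region via boto3's EC2 `describe_spot_price_history`; the `cheapest_offer` helper and the 1.2x bid margin are just illustrative choices):

```python
# Sketch of automated spot placement: pick the cheapest (region, AZ)
# from a snapshot of current prices, then bid slightly above it.

def cheapest_offer(prices):
    """Return the (region, az, price) tuple with the lowest spot price."""
    return min(prices, key=lambda offer: offer[2])

# Hypothetical price snapshot for one instance type; with boto3 you'd
# build this from describe_spot_price_history in each candidate region.
sample_prices = [
    ("us-east-1", "us-east-1a", 0.31),
    ("us-east-1", "us-east-1d", 0.27),
    ("us-west-2", "us-west-2b", 0.24),
    ("eu-west-1", "eu-west-1a", 0.35),
]

region, az, price = cheapest_offer(sample_prices)
# Bid a margin above the current price to ride out small fluctuations.
bid = round(price * 1.2, 4)
print(region, az, bid)
```

The margin is the usual trade-off: bid too close to the current price and you get preempted more often, bid high and you cap your worst-case cost at your bid.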
If you need streaming, obviously this might not be the way to go.