The machine on one provider might be experiencing more load than a similar machine at a different provider, biasing the results.
That being said, Vultr has been good for our purposes.
Uhh, a statement like this somewhat hurts the ability to trust the other claims of the benchmark. Sure, Linode has the lowest clock speed, but you're comparing Epyc Milan, which is almost a decade newer than the Sandy Bridge-EP of Cloudfanatic, for example. Unsurprisingly, this is even reflected in the one CPU benchmark run, Geekbench, which Linode comes first in.
Why?
> Sure, Linode has the lowest clock speed, but you're comparing Epyc Milan which is almost a decade newer than the Sandy-Bridge EP of Cloudfanatic for example.
Sure, but also take into consideration that the Linode instance costs $48/month while the Cloudfanatic one costs $18/month. Both of them have the same number of CPU cores too.
Seems like this is a Linode problem of offering old hardware, instead of a problem with the benchmark itself.
I’m not the GP, but I agree with the sentiment.
Clock speed alone is a spectacularly bad metric to use for a summary result regarding relative performance of different CPUs.
If one wants to assess performance on a workload of integer addition/subtraction with a linear chain of data dependencies on a working set that fits in the L1, then sure, clock speed will correlate strongly with performance (in practically every metric other than maybe power).
On the other hand the performance of a lot of real-world workloads will be influenced by things like the number of execution units per core, the types of operations they perform, the throughput of various instructions, the specifics of the front end (e.g. assuming x86/CISC, the types of uops; the size of the BTB; the branch-prediction algorithm), the size of the various caches (TLBs in addition to I$ and D$), the specifics of the coherency protocol, the memory bandwidth, etc.
All that to say that CPUs should be compared by benchmark results, not blind comparisons of clock speed, so quoting clock speed in a summary might lead one to question the rest of the results.
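For example, a quick like-for-like comparison across providers could be run with sysbench (assuming it's installed on each VPS; Geekbench works similarly):

```shell
# Single-core throughput: one thread, fixed workload size
sysbench cpu --cpu-max-prime=20000 --threads=1 run

# Multi-core throughput: use however many vCPUs the plan provides
sysbench cpu --cpu-max-prime=20000 --threads="$(nproc)" run
```

The events-per-second figure reflects actual per-core throughput (clock × IPC × microarchitecture), which is exactly what a bare clock-speed number fails to capture.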
Linode has the newest hardware, and the highest single- and multi-core speeds despite the lowest clocks. Mentioning the higher clock speeds instead of the benchmarks is pointless and very misleading.
To add on to what denotational wrote, an assertive and wrong statement about the CPU performance also suggests they didn't research in much depth before running the benchmark, so it casts doubt on whether they controlled for other factors in their test too. For the disk test, did they take care to ensure all the VPSes are running the same filesystem? Do any of the providers make you choose between directly attached storage and network block storage? And so on.
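As a sketch of what controlling for those factors might look like: pin down the filesystem yourself and bypass the page cache, so every provider is measured the same way (the device and mount names here are assumptions; the attached volume's name varies by provider):

```shell
# Same filesystem on every VPS (here: ext4 with default options)
mkfs.ext4 /dev/vdb               # /dev/vdb: the benchmark volume; destroys existing data
mount /dev/vdb /mnt/test

# 4 KiB random reads with O_DIRECT, so the page cache doesn't skew results
fio --name=randread --filename=/mnt/test/fio.dat --size=1G \
    --rw=randread --bs=4k --ioengine=libaio --direct=1 \
    --iodepth=32 --runtime=60 --time_based
```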
Most VPS providers rent rack space at large colocation facilities in multiple cities. In some cases they're located right next to major peering points, which gets them very good peering and very cheap bandwidth.
Vultr in particular is massively underrated and is certainly good enough to use in production.
One caveat though: most of these providers don’t handle sanitization of storage or encryption at rest as thoroughly as the big clouds do. I’d recommend handling your own encryption at rest with these if you have sensitive data. See: Vault, LUKS, etc.
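A minimal LUKS sketch of that, assuming the sensitive data lives on a secondary volume at /dev/vdb (the device name and mount point are placeholders):

```shell
# One-time setup: format the volume as a LUKS container, open it, put a filesystem on it
cryptsetup luksFormat /dev/vdb        # prompts for a passphrase; destroys existing data
cryptsetup open /dev/vdb cryptdata    # maps it to /dev/mapper/cryptdata
mkfs.ext4 /dev/mapper/cryptdata
mount /dev/mapper/cryptdata /data

# Before returning the VPS to the provider, close the mapping
umount /data
cryptsetup close cryptdata
```

With the container closed, whatever the provider's sanitization practices are, the blocks on disk are only ciphertext.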
That’s why you’re paying so much for AWS.
Never heard of them running on other providers though.
Surely that would be a problem for all of the compliance regimes, e.g. HIPAA, FIPS?