In the case of the cloud, I haven't yet seen FIPS certification of any of the cloud providers' internal systems.
This need for certification has to be balanced against usability: it's relatively easy to certify a hardware design that has no compute, or a physically isolated CPU and memory like an SEE Machine, but such a design is also much less useful.
We prefer general-purpose machines that have the same strong barriers we already know are being deployed to protect customers from each other, barriers that are being probed and tested all day, every day, both by AWS internally and by bad actors trying to steal other people's data. The lack of known pending exploits, continuous patching by AWS, and battle-testing at massive scale are more important to us than a FIPS certification at this point. That said, if AWS managed to get the Nitro system FIPS-certified, it would be a big plus.
Literally everybody has been promising general-purpose machine isolation for decades, and basically every one of them has failed. Actually designing such a system is an extraordinary claim demanding extraordinary evidence. Standards that can genuinely distinguish such a property, such as Common Criteria at EAL 6 and 7, require rigorous verification work, including formal proofs of correctness, to positively assert it. It is ridiculous that people keep believing such claims without any guarantees, verification, or audits when everything uncertified is so catastrophically bad at achieving isolation.
To quote Theo de Raadt:
“You are absolutely deluded, if not stupid, if you think that a worldwide collection of software engineers who can't write operating systems or applications without security holes, can then turn around and suddenly write virtualization layers without security holes.
You've seen something on the shelf, and it has all sorts of pretty colours, and you've bought it.”
Consider a scenario where you need to compute a function of secret data, but unfortunately that function isn't of a shape that any CC- or FIPS-certified solution (HSM or otherwise) supports out of the box, and ordering one built to your specifications would eat through your budget for the next 15-20 years. Your choices are roughly:
- Use an HSM anyway, but only to encrypt the data (or its encryption key), and have your application servers decrypt the data, evaluate the function, return the result, and wipe the data from memory each time. Then you're arguably just doing fancy encryption at rest.
- Use enclaves. Even though they are much less hardened than a CC-certified and audited solution, assign some probability to the event that somebody compromises them and burns the corresponding 0-day on you. Would you assert that that probability is 1? If not, why not still do it?
And you can still store the actual data encryption keys in an HSM, and use any higher-level features that it does support! In fact, if you use your HSM in a fairly low-level way (i.e. with an interface like "decrypt data x using key k"), access to the HSM needs to be guarded carefully, and enclaves are a candidate for that.
- Don't evaluate the function, or possibly don't even store the data in the first place. The only problem with enclaves, in my view, is if they make people disregard this option due to the misunderstanding that "enclaves are perfectly secure". (This applies to any security technology, but the fancier or more magical a solution looks or is marketed as, the greater the temptation.)
- Do none of the above, but still store the data and evaluate the function on your systems. This is arguably the norm today in most organizations.
Very good isolation at all layers, formal verification, high resistance to tampering and fuzzing, and so on. Every single part was locked down in a very thoughtful way.
There may be no legal liability with the purchase contracts, but these manufacturers certainly seem to approach the problem with the level of consideration that I’d hope for given the importance of the technology.