Do you have any sources for this? I worked at a company developing NFV appliances, and we always had much higher network throughput on VMware than on KVM unless we used some convoluted vswitch alternative or PCI passthrough.
VMware isn't just a hypervisor; it's an entire ecosystem of VM management and orchestration. You can tie it into AD, delegate different permissions and roles to users/groups, manage upgrades, drive it from PowerShell and other APIs, and integrate it with Dell and Cisco solutions: all sorts of additional features you won't find running CentOS and KVM without adding more third-party software on top and cobbling it together.
This is it, really. For big companies this kind of stuff is important.
And "cobbling it together" is very much understating the effort involved to keep it running: eventually you'll upgrade one of the components and it will break something, because you didn't read the release notes of an upstream dependency that mentioned a breaking change that affects your particular setup.
Having the vendor (VMware) provide this as a delivered, tested, supported solution is so much easier.
Similarly, I suspect it would be significantly simpler to find IT firms and/or hire individuals with VMware knowledge than it would be to find the equivalent on KVM + Cockpit + the dozen other components you need.
Note: I'm not saying this is right or the way things should be, but simply pointing out the "enterprise" perspective. Boring technology is safe.
I have in-depth knowledge of KVM, and the issue isn't KVM; it's everything else.
Start comparing VMware against Proxmox, which is an out-of-the-box solution anyone can use and includes every single feature of ESX, plus many vSphere features you'd easily lose your shirt for. https://www.proxmox.com/en/
Here's an independent performance test. KVM is easily faster than ESX.
There are plenty of features Proxmox doesn't have that VMware does. I've run into a few of them:
1. No ability to pin vCPUs to physical CPUs from within Proxmox. You have to drop to bash and set the affinity of each vCPU thread's PID by hand if you want that.
2. You can't provision a VM with more vCPUs than the host has physical CPUs. For example, if I have a host with 8 cores, the maximum number of vCPUs I can allocate to a VM is 8. And yes, I did have a use case for this.
3. You can't configure networked serial ports from within Proxmox. You have to drop to bash and edit the vm configuration file by hand if you want that.
4. Lack of a serial port concentrator, which means you can't reliably use networked serial ports when migrating VMs across hosts in a Proxmox cluster. In the NFV world this can be pretty important.
5. You can't manage multiple Proxmox VM hosts from a single UI unless they're clustered, which in many cases isn't practical to do. vSphere will let you manage multiple independent hosts from a single pane of glass.
6. (At least historically) lack of RSS/multiqueue support in virtio networking. vmxnet3 on VMware supports this and lets you scale better, though I'll admit it's been several years since I last looked at this area.
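And for point 3, the by-hand edit looks something like this. A sketch under assumptions: VMID 100 and port 4555 are made up, and the exact -serial syntax varies by QEMU version, so check yours.

```
# /etc/pve/qemu-server/100.conf (VMID 100 is hypothetical)
# No UI knob for networked serial ports, so pass raw QEMU arguments:
args: -serial tcp:0.0.0.0:4555,server,nowait
```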
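To make point 1 concrete, here's roughly what that drop-to-bash pinning amounts to, sketched in Python rather than raw shell. Everything here is an assumption about a stock Proxmox/QEMU setup: the PID-file path, the "CPU <n>/KVM" thread naming, and the vCPU-to-core map are not Proxmox-provided tooling.

```python
import os
import re

# QEMU names its KVM vCPU threads "CPU <n>/KVM"; other threads (I/O,
# workers) use different names, so /proc/<pid>/task/<tid>/comm is enough
# to tell them apart.
_VCPU_NAME = re.compile(r"CPU (\d+)/KVM")

def parse_vcpu_name(comm: str):
    """Return the vCPU index if this thread name is a KVM vCPU, else None."""
    m = _VCPU_NAME.match(comm.strip())
    return int(m.group(1)) if m else None

def pin_vcpus(qemu_pid: int, cpu_map: dict) -> None:
    """Pin each vCPU thread of a running QEMU process to a host core.

    cpu_map maps vCPU index -> host core, e.g. {0: 2, 1: 3}.
    """
    task_dir = f"/proc/{qemu_pid}/task"
    for tid in os.listdir(task_dir):
        with open(f"{task_dir}/{tid}/comm") as f:
            vcpu = parse_vcpu_name(f.read())
        if vcpu is not None and vcpu in cpu_map:
            # sched_setaffinity accepts thread IDs, not just process IDs.
            os.sched_setaffinity(int(tid), {cpu_map[vcpu]})

# On Proxmox, the QEMU PID for a hypothetical VMID 100 would be read from
# /var/run/qemu-server/100.pid before calling pin_vcpus(pid, {0: 2, 1: 3}).
```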
Again I'm not a VMware cheerleader. I'm sure I could generate a list like this for what Proxmox has that VMware lacks. But it's incorrect to state that it includes every single feature of ESX.
Expensive as hell though.
“No one ever got fired for buying IBM” applies equally to big enterprise SaaS providers and companies like Oracle, Microsoft, VMWare, Salesforce, etc.
It is also deployed in many many enterprise and government settings.
Also, notice that when you go to VMware's home page you see “referenceable clients”, i.e. well-known companies that use the software? This is “Enterprise Marketing 101”.
Besides, I can throw a stick and find someone who knows VMWare. As a (hypothetical) CTO of a non tech company, I don’t want “support”, I might want an MSP to do it for me.