It depends:
- the Hyper-V backend uses a VM for running the actual containers, which is a bit annoying because the VM just sits there and eats your RAM whenever it's on (technically you could enable dynamic memory for the VM, but I think that used to break things)
- the WSL2 backend uses the whole fancy new system that Microsoft came up with to, I don't know, attempt to embrace and extend Linux or something; it's far less annoying than Hyper-V but takes a somewhat different approach to setting resource limits (e.g. if you only want to give it 4 GB of RAM or 4 CPU cores, so something is left for the rest of your system and it doesn't slow down when you run docker build)
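For reference, those WSL2 limits aren't set in Docker Desktop itself but in a `.wslconfig` file in your Windows user profile, and they apply to the whole WSL2 VM (Docker Desktop's backend included). A minimal sketch, using the 4 GB / 4 core figures from above as example values:

```ini
; %UserProfile%\.wslconfig — limits for the WSL2 VM as a whole,
; which is what Docker Desktop's WSL2 backend runs inside
[wsl2]
memory=4GB     ; cap the VM at 4 GB of RAM
processors=4   ; cap the VM at 4 logical CPUs
```

The new limits only take effect after the VM restarts, e.g. after running `wsl --shutdown`.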
Honestly, Docker on *nix is a way better experience, but not everyone can run it for a variety of reasons (e.g. corporate policy, or having the same PC for personal development and gaming).

1. We were all spending 6-8 hours per week trying to bend our host OS to work like Linux. Containers, package managers, compatibility layers, different autocompletes, etc. all create work. The adaptations to avoid running Linux also often don't behave identically to the real thing (e.g. VM CPU & memory allocation vs. Linux container CPU & memory allocation, or installing Postgres on macOS vs. Linux), so there are lots of little learning curves.
2. We spent a lot of time learning how to do things on Windows and macOS that did not apply to production. Now all the "how to make x work" learning applies to both development and production.
Incidentally, running Podman or Docker on Linux is a much better experience than the simulation via VM that exists on Windows and macOS. Another plus is that containers behave for developers the way they will in production, leading to better decisions about using containers.
This is probably the most important point and something I can wholeheartedly agree with! On Windows, you get occasional issues with bind mounts or file permissions (e.g. wanting to bind-mount SSH keys into a container, but the software complaining about 77x permissions instead of the 700 it expects, which you literally cannot change), or even networking.
Whereas everything "just working" on *nix is great.
WSL2 is no different from using VirtualBox or VMware, except now it's shipped by Microsoft.
As others pointed out, the Linux desktop is never going to happen beyond the usual 1%, other than Android/ChromeOS workloads.
Typing this from an Asus 1215B, probably the last GNU/Linux device that I will own.
Many tried and went back to OSX due to the rough corners of desktop Linux.
But for personal usage, Linux is more than acceptable - if you can grok all of its pain points, that is. Linus Tech Tips did a few videos on the topic recently; they were painful but understandable to watch. Personally, I find Ubuntu LTS (or an equivalent boring long-term-support distro) or something with XFCE solid and really usable, especially if you intend to do programming, where things are sometimes weird on Windows.
Except for gaming. Proton still has a ways to go, Wine isn't optimal for that sort of stuff, and neither Linux nor OS X is worth supporting for many games/projects/software because they represent a small part of the total userbase. For example, in regards to games: https://store.steampowered.com/hwsurvey
- Windows: 96.68%
- OSX: 2.20%
- Linux: 1.12%
Supporting any system that's not Windows for games would be like burning money. At that point, you might as well offer the game for free on those systems (if using an engine like Unity/Unreal/Godot, where builds are easy) but refuse bug reports on them, if you don't have the resources for that kind of support.

Yes, but wasn't the main selling point of Docker "no more VMs"? It solves the same issues, without the drawbacks of running virtual machines.