I recently realized that 100% of what I use Windows for was as a WSL2 foundation: It had been reduced to being an extremely overbearing and heavyweight host machine for a Linux VM. Nothing in my life was Windows-only anymore, and it was basically just inertia that I even still had it installed.
I'd been a "Windows guy" for decades, had decades of Windows software dev under my belt, even got my MCSE, MCDBA, along with other Microsoft certs, and even wrote for MSDN Magazine. No longer did it have any leverage on my profession at all, which was shocking to me.
The next day I purged Windows from my two main working machines, so now I'm pure Linux and macOS. A few weeks in, I can say it has been a marvelous transition; it cuts out a middleman that was no longer relevant.
_However_, I still find the Linux desktops I've tried too buggy. While the hardware support is incredible (compared to Windows out of the box), I constantly hit bugs with fractional scaling on multiple monitors. I'm hopeful that Ubuntu 26.04 may finally iron out the last problems with this. The latest version of Fedora I installed did fix all of this, but I'm far too used to Debian-based OSes.
Well, even Microsoft uses React Native for a lot of Windows-only apps.
Thus React Native was the only alternative those teams had left for some kind of nice GUI design experience alongside C++/WinRT when using WinAppSDK (think Qt with QML/C++).
Agree on the Flutter comment.
I know I'm beating a dead HN horse here, but how the hell is it possible that websites now embed megabytes of JavaScript? LinkedIn loads 32 MB of JavaScript files, about half the RAM my first computer had. Jira loads 50 (!) MB of JavaScript, enough to include the uncompressed text of War and Peace 16 times; that's the size of a full installation of the entire Windows 95 operating system. GitLab's static landing page is 13 MB of JavaScript, larger than Twitter's.
What the hell are we doing here? Why can I spin up a 100 MHz, 8 MB RAM VM with a hard drive 1/16th the size of your entry-level MacBook's RAM and have apps open immediately? I understand some backsliding as things get more high-level, but a website loading 50 megabytes of JavaScript is, to me, a bright neon sign that screams "something has gone terribly wrong here". Obviously programs, websites, and operating systems have become incredibly more complex, but your $200 A-series Samsung phone has about 8 cores at 2.2 GHz each, while a $200 processor when Windows 95 was released had one core at 0.1 GHz, making the raw processing power roughly 176x greater. Keep in mind that this $200 buys a fully functioning computer in the palm of your hand. The actual CPU in a midrange phone like the Samsung A15 5G is the Dimensity 6100+, which costs all of $25.
There must be some sort of balance between the ease of prototyping, creating, and deploying an application without bundling it with Electron, and building websites that use tens of megabytes of a scripting language for seemingly nothing. Especially when we can see that fast, usable websites are possible (see this very website, or the blogs of many of the people who post here, compared to Reddit or your average Medium or Substack blog).
How the hell do we fix this? The customers clearly don't want this bloat, and companies clearly shouldn't want it either: research indicates that users exposed to a 200 millisecond load delay on Google performed 0.2-0.6% fewer searches, an effect that remained even after the artificial delay was removed, and this was replicated by Microsoft, Amazon, and others. (Amazon is frequently quoted as saying that every 100 milliseconds of page load time cost them 1% in sales, though it's hard to find definitive attribution for this.) Programmers should hopefully not want to create crappy websites, just like mechanics should hopefully not want to install brake pads that squeal every time the driver brakes.
This got way longer than the two sentences I expected the post to be, so my apologies.
[1] https://tonsky.me/blog/js-bloat/
[2] Schurman and Brutlag, "Velocity and the Bottom Line"
Both of these statements are false. If that were really the case, then a competing company/dev could implement a native counterpart and just siphon off all the users. I've only seen this happen with CLI tools (e.g. esbuild, rollup, uv, etc.)
Citation needed. The customers clearly do want it; for example, most programmers chose VS Code over a native app.
VirtualBox has really broken graphics support; you can only run software-rendered Linux DEs that way.
There are projects like https://github.com/jamesstringer90/Easy-GPU-PV that you can use to set up a Hyper-V machine with the same GPU paravirtualization.
> Automatically Installs Windows to the VM
This seems to be focused on Windows guests. I need this for Linux guests.
Maybe when Hyper-V makes this usable out of the box for any guest, I'll take a look. And if it's something shareable, maybe VirtualBox should pick it up.
I'm not using X, I need a normal KDE Wayland session.
I've tried Optimize-VHD, but renewing the VM this way frees up disk space and speeds up the VM as well. None of the WSL settings for sparse disks / disk shrinking seem to work well.
Here's what I usually do:
$ tar -czf /mnt/c/Temp/home-backup.tgz $HOME
$ apt list --installed > /mnt/c/Temp/packages.txt
Then delete the VM, create a new one, and reverse the process (the archive stores paths relative to /, so extract there):
$ tar -xzf /mnt/c/Temp/home-backup.tgz -C /
$ apt update
$ tail -n +2 /mnt/c/Temp/packages.txt | cut -d/ -f1 | xargs apt install
Still, thanks for the process you use.
My WSL instance is pretty long-lived now, having survived quite a few Ubuntu upgrades and installations of stuff that I probably no longer need.
No matter what you do, there will always be some weird platform-detection or line-termination issue that pops up somewhere. And if it isn't that, it's degraded performance or a kernel-level incompatibility.
I don't do much OS level engineering these days though and would probably fire up some VMs for that.
- Yes, these issues persist with WSL2.
- WSL2 allows mounting between the system/subsystem, but there is considerable overhead.
- Using WSL for remote workspaces from the host is very much a mixed bag.
- Attempting to use WSL entirely with graphical applications has very limited/poor support.
- If you wish for VM acceleration you have to use Hyper-V; not all toolchains work with Hyper-V, and this heavily restricts the host machine.
- If you wish to do anything that crosses the subsystem and the host, line delimiters and platform detection are very error prone.
- If you accidentally misconfigure WSL2 (which is quite easy to do) the WSL userspace can have substantial access to the host files, often beyond what may be initially apparent.
- Among compatibility issues, non-standard socket implementations have caused a lot of incompatibilities with software for me.
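On the line-delimiter point, the usual workaround (a sketch; the file names here are just placeholders) is to strip carriage returns before Linux tools ever see a file that came from the Windows side:

```shell
# A file written on the Windows side often carries CRLF line endings
printf 'build:\r\n\tgcc main.c\r\n' > /tmp/Makefile.win

# Strip the trailing carriage returns so Linux tools parse the file correctly
sed 's/\r$//' /tmp/Makefile.win > /tmp/Makefile.unix
```

For files that travel through git, setting `core.autocrlf` appropriately on each side avoids most of this class of problem in the first place.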
Considering it's just a headless Linux VM with some integration with the host, I don't understand what reliability or performance problems it could possibly introduce beyond what any VM solution provides.
There are a few gotchas with WSL. I hate how, by default, it includes your Windows PATH in your Linux PATH. It's easy to turn off, and my init scripts for any VM do that anyway, but it's the main thing I've seen people new to WSL get tripped up by. It is useful to be able to run Windows programs from within WSL, like all the CLI tools that pop up a browser to make you log in, but that's about where its usefulness ends for me.
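For reference, turning the PATH inheritance off is a two-line change in `/etc/wsl.conf` inside the distro (it takes effect after a `wsl --shutdown`):

```ini
[interop]
appendWindowsPath = false
```

Setting `enabled = false` in the same section would disable running Windows .exe files from WSL entirely, which also kills the browser-login trick mentioned above.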
> No matter what you do, there will always be some weird platform detection
In my experience, unless you're running a very mainstream distro (read: Ubuntu), there will always be a weird "platform detection" issue. I run openSUSE on most of my devices and I'm used to scripts not working because "uhhh, this is not Ubuntu or Debian or Fedora". The only "platform detection" issues I run into on WSL are scripts that assume `uname -r` output in a very narrow format.
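For what it's worth, the more robust way to do distro detection, rather than guessing from `uname`, is `/etc/os-release`, which nearly every modern distro ships (a sketch):

```shell
# /etc/os-release is a standardized key=value file that can be sourced as shell
. /etc/os-release
echo "distro: ${ID} ${VERSION_ID:-rolling}"

# WSL-specific detection: the WSL kernel identifies itself in /proc/version
if grep -qi microsoft /proc/version 2>/dev/null; then
    echo "running under WSL"
fi
```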
> or line termination that pops up somewhere.
That goes back to my original comment about running Windows programs from Linux, or moving files to/from Linux. I never run into this issue because, just like on a Linux desktop, I only interact with Linux through the command line. Even on a full Linux desktop, I keep a mental separation between GUI apps and terminal apps.
> it’s degraded performance or kernel-level incompatibility.
The only "kernel-level incompatibility" I run into is that WSL kernel doesn't have /dev/kvm. Granted I don't do a lot of kernel module specific development, so I don't know how something like a USB device or a PCIe interface can be passed to the WSL instance. But again, I'm thinking of it, and treating it like a user-mode development VM, not a full host.
I need it to access ssh keys from my yubikey. It’s painless if you just set it up to automatically forward the device on startup.
The only difference between 2010 and 2026 is that nowadays, instead of having a mix of VirtualBox/VMware Workstation depending on my work system, I have WSL 2.
However, my Windows developer experience goes back to Windows 3.1, so I mix and match my needs between the Windows and Linux sides.
WSL 2.0 and the Virtualization Framework really are the Year of the Linux Desktop. /s
The only reason I use Windows is that Nvidia drivers are easier to set up. But once I'm inside my Fedora WSL, it feels like home, not the Windows host.