WSL1 was pushing the boundaries of OS research:
- a method for having multiple syscall interfaces in a mainstream OS
- processes in WSL1 were real NT processes (even if lacking some of the NTOS environment)
- direct integration with the rest of the OS without an awkward VM separation layer
In comparison WSL2 is basically an optimized VM with some fancy guest additions. Color me underwhelmed.
I understand the argument that WSL2 is faster than WSL1 in file system operations. I expect this will only be true for their root file system ("VolFs") and that performance will remain the same or suffer for Windows drives ("DrvFs"). I am certain that they could fix "VolFs" performance by moving the file system off NTFS and into a raw disk partition or VHD. (Note: I write file systems both in and out of kernel.)
Finally, WSL2 will be distributed with Windows, which raises some licensing questions (IANAL), if not in the letter of the GPL then at least in its spirit. I write GPL'ed software myself and I would be somewhat miffed if I saw my software used in a similar manner (i.e. "via a VM", but still distributed with non-GPL code).
I think they found the boundary of OS research in this case, and a better product is using the actual Linux kernel.
I think this comment may have been disingenuous on their part. The reason is that this problem more than likely still exists in WSL2 for the /mnt/c, /mnt/d file systems (i.e. what they used to call "DrvFs" in WSL1).
WSL1 comes with (at least) 2 file systems. "VolFs" which is the file system that they use for the Linux root file system and "DrvFs" which is the file system that they use to access Windows drives (C:, D:, ...).
In WSL1 VolFs was implemented as a layer on top of NTFS, so it comes with all the Windows file system and NTFS baggage. In WSL2 they will replace this file system with a native ext4 formatted partition on a VHD file, thus eliminating the Windows I/O stack (except for READ/WRITE I/O to the VHD file).
My contention is that they could have instead replaced VolFs with a native WSL1 file system that uses a disk partition or VHD as its backend storage, thus eliminating the Windows I/O stack in the same way. They could then have implemented proper Linux file system semantics without any baggage.
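To make the VHD idea concrete, here's a minimal sketch of what that backing storage looks like from the Linux side: an ext4 file system living inside a plain file, which is essentially what WSL2 does with its ext4-formatted VHDX (minus Hyper-V). The filenames and size here are made up for illustration.

```shell
# Sketch: build an ext4 file system inside a plain file -- essentially what
# WSL2 does with its ext4-formatted VHDX. Filenames/sizes are illustrative.
dd if=/dev/zero of=rootfs.img bs=1M count=64 status=none

# Format it as ext4 if e2fsprogs is installed (-F permits a non-block-device target)
command -v mkfs.ext4 >/dev/null && mkfs.ext4 -F -q rootfs.img

# With root privileges you could then loop-mount it as a native Linux file system:
#   sudo mount -o loop rootfs.img /mnt/rootfs
echo "backing file created"
```

The point being: once the backend is a raw image like this, the file system on top can implement full Linux semantics with no NTFS baggage, whether the driver serving it is Linux's ext4 or a hypothetical native WSL1 one.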
> I think they found the boundary of OS research in this case
Unlikely. It would not surprise me if the changes we are seeing are less technical and more political.
But that would require them to implement a new file system from scratch wouldn't it? VolFs in WSL1 relied on NTFS to do the heavy lifting.
Far easier to just let the Linux kernel handle it.
Mere aggregation of non-GPL code with ("distributed with") GPL code is expressly consistent with the GPL; it is contrary to neither the letter nor the spirit of the license.
https://www.gnu.org/licenses/gpl-faq.html#MereAggregation https://www.gnu.org/licenses/gpl-faq.html#AggregateContainer...
I do not know, but I can see arguments on both sides. This is why I would love to hear the opinion of the FSF on this.
Has it? WSL1 and WSL2 seem to be parallel alternatives, the latter isn't replacing the former now, and it's not clear that it is intended to.
I asked the same question on their GitHub issues: https://github.com/microsoft/WSL/issues/4022
Switching to a virtualization model feels like a step backwards. If I wanted a virtualized Linux on Windows, I’d run a virtual Linux on Windows. WSL is special because it’s a middle ground.
In WSL1 they share the same IP addresses and TCP/UDP port space, while WSL2 has a separate IP address. I suppose there is some NAT to make networking work in WSL2.
At the end of the Q&A they mentioned that sharing localhost, IP addresses, and the port number space (a WSL1 feature) may be added in the future, but they have no roadmap for it right now.
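You can at least tell which model you're under from inside the distro. A rough check (the "microsoft" tag in /proc/version is a well-known convention under both WSL1 and WSL2, not a documented contract):

```shell
# Rough check from inside a Linux shell: /proc/version mentions "microsoft"
# under both WSL1 and WSL2 (a convention, not a documented guarantee).
if grep -qi microsoft /proc/version 2>/dev/null; then
  wsl=yes
  # Under WSL2, eth0 carries the NATed address, distinct from the Windows host:
  #   ip addr show eth0
else
  wsl=no
fi
echo "wsl=$wsl"
```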
Aside from not having some Linux pseudo-filesystems, the bigger issue has been the speed of file operations. I dread having to run `dep ensure` and `yarn install`.
Why not just Hyper-V? Every few weeks I try to figure out how to set up a static IP but I cannot for the life of me. So it takes 1-2 minutes every time I want to reconnect because not only does it get a dynamic IP by default, it is reset every day or so. Need to go into the VM via Hyper-V, get the current IP address, reset /etc/hosts within WSL, reset /etc/hosts within Windows, SSH into VM within WSL. It drives me nuts.
Really looking forward to WSL2 for faster file operations and being able to run all my programs.
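For what it's worth, one way I've seen to pin the address on an Ubuntu guest is a netplan file. This is only a hedged sketch: the interface name (eth0), subnet, and gateway are assumptions about your Hyper-V virtual switch, so adjust them to your setup.

```shell
# Hedged sketch: pin a static address on an Ubuntu Hyper-V guest with netplan.
# eth0, the subnet, and the gateway are assumptions about your virtual switch.
cat > 99-static.yaml <<'EOF'
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: false
      addresses: [192.168.50.2/24]
      gateway4: 192.168.50.1
      nameservers:
        addresses: [1.1.1.1]
EOF
# Then, inside the VM (needs root):
#   sudo cp 99-static.yaml /etc/netplan/ && sudo netplan apply
```

With the address pinned, the /etc/hosts entries on both sides only need to be written once.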
Regarding the networking issue, I sort of solved it by using VMBus between my VM and Windows. Shameless plug: https://github.com/bganne/hvnc
WSL is what makes it somewhat bearable. I look forward to Windows Terminal and WSL2.
Basically, a driver in the Linux guest (hv_balloon for Hyper-V, but you have the same thing for KVM, VMware, etc.) can artificially "inflate" its memory use when it detects too much unused memory, giving it back to the host. When the guest needs more memory, the balloon driver releases it again. Coupled with memory hotplug support, things can be pretty dynamic.
Not sure if they do something more sophisticated for WSL2 though.
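You can watch this from the guest side. A hedged sketch: hv_balloon only shows up on a Hyper-V guest, so on other machines the module check simply comes up empty and the memory numbers stay static.

```shell
# Watch dynamic memory from inside a Linux guest. Under Hyper-V dynamic memory,
# MemTotal itself moves as the balloon inflates/deflates and memory is
# hot-added; on a fixed-size machine these numbers are just static.
grep -E '^(MemTotal|MemAvailable)' /proc/meminfo

# The balloon driver is loaded as a module only on a Hyper-V/KVM/VMware guest:
lsmod 2>/dev/null | grep -i balloon || echo "no balloon driver loaded"
```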
This is similar to how running a userland application on Linux doesn’t require the application to be GPL’d because it’s not directly linked to the kernel. While it interacts with the kernel it is not derived from it.
Similarly, your program might statically link against a GPL'd library, but only pass bulk data through a programming interface in a coarse manner, and the result may not be considered linking. The FSF FAQ even explicitly addresses this case.
GPL leaves these kinds of mechanisms blurry and ill-defined (I presume intentionally); it's just that, as engineers, we are commonly only taught how violations manifest in the usual case.
Even then there are still more exceptions for 'runtimes' and the like, where even though, in a literal sense, the final assembled program is linked against GPL'd code when executing, the result is not considered to be covered by the GPL.
None of this stuff is actually part of the license text -- it's built on precedent and common understanding. These things are a lawyer's job; we're just engineers.
GPL is a clever hack of copyright law. It grants derivative works copyright permission on the condition that they accept the same license.
Now, the Windows kernel and Nvidia's binary blob driver exist without relying on the Linux kernel at all. How could they be considered derivative works in terms of copyright law? Since GPLv2 relies on copyright law, it has no effect in situations where copyright law doesn't grant an exclusive right.
For Nvidia's driver, Nvidia released a thin shim wrapper as GPL, which interfaces between the Linux API and the Nvidia binary blob driver API. But since the core binary blob driver exists independently of the Linux kernel, it can't be a derivative work of the Linux kernel.
For the WSL2 case, Microsoft may take the same approach. They may release a thin shim wrapper that interfaces the Linux kernel and Windows as GPL. But the Windows kernel itself cannot be a derivative work of the Linux kernel if it's well separated.
At least, that's my conclusion. I'm not a lawyer.
Many kernel developers are of a different opinion, and consider Nvidia in breach of their license. The fact that nobody has sued does not mean that they are in the clear. Don't consider the use of a thin shim to be some sort of license firewall.
Anyone other than a select few multinationals probably shouldn't consider legal disagreements with their partners a valid business strategy.
https://wpdev.uservoice.com/forums/266908-command-prompt-con...
"It'll be available on all SKUs that run WSL currently including windows home!"
Without that, then the main reason that I'd even be using the subsystem - GPU compute - is unavailable, and I'll need to actually boot into Linux if I want to do anything useful.
But other people seem to have a reason, based on the comments here and in every submission about WSL.
What are your needs?
Macs work really well out of the box, but with some tweaks WSL makes Windows quite a decent development workstation.
Currently I'm using Windows + WSL and I'm quite satisfied. Looks like with the new version some of the tweaks to use docker are not necessary anymore. Great work, Microsoft, keep the pace.
But Windows is the absolute last place I find that. All I get are updates that break my setup, constant inane interruptions from Cortana or the desktop or wherever, advertisement tiles in the fucking start menu, forced updates that can't be done in the background, Windows Activation disappearing after hardware upgrades, etc.
I feel like I'm in the Stepford Wives with all these people coming out of the woodwork to proclaim how majestic the Windows experience is.
A few years back I switched from Mac to Ubuntu for the faster, cheaper, diverse hardware and I have to say, it's pretty much perfect in the "Just Works" department on the 3 laptops + 2 desktops I've installed it on. But I'm also one of those people that actually liked Unity so I don't have to mess with it much after installing.
Edit: Oops looks like someone rebooted my W7 laptop.. :'(
I'm using Windows 10 Pro (Insider Preview fast track) and not seeing the annoyances you are experiencing.
I do have several setups, each for a separate task. It depends on what you do. E.g. doing creative stuff, it can be annoying when you constantly get derailed by one darn ugly UI like Gnome3. Now, you don't need to dabble with eyecandy every full moon, but there is nothing wrong with setting it up once for a specific workflow, is there?
The only secret is to use hardware you know beforehand is well supported.
That is not required to use a Linux desktop.
Really? I don't know your definition of tweaks, but I've always found macOS requires a bunch of third party apps (often only paid alternatives) to be useful for a power user. Last I checked even its window management support was awful, and in a lot of situations it's impossible to get to where you want without touching the mouse.
KDE works much better out the gate, and is actually an advanced desktop environment in terms of possible customization if that is your bag.
it's called GNOME
I use Linux exclusively on all my systems, and have not had any problems at all. So I always wonder what hardware is being used that does not work...
For my laptops I use ThinkPads and various Dell systems. The only thing I always make sure of when buying a laptop is that it has an Intel CPU with an integrated Intel GPU -- just doing that, I have never really had any problems.
In any case, I am looking for the exact model you are using that has the problem -- so I can maybe find a cheap one on ebay and mess around with it.
It's probably more of a problem with the hardware manufacturers than Linux, but regardless the end result is that it only works on Windows.
For example, my Surface Book, when it's working, resumes instantly.
I like to record audio, and use DAWs and plugins/ VSTs.
What about artists who use graphic design or video editing programs? Is your use case the only valid one, for whatever reason?
Looks like a DAW, right? DAWs aren't really the problem - Reaper is on Linux as well, and it's perfectly serviceable. The issue is proprietary software like Helix Native/HX Edit, and VSTs with tough DRM would be borderline impossible to set up, I assume.
I kinda gave up on Linux when I got my XPS 15 though, because there just didn't seem to be a way to have good switchable graphics without reboots (which is just dumb and impractical), decent battery life, and an OS that didn't freeze every time the touch screen was activated (mainly an issue with Ubuntu and derivatives, for whatever reason). Oh, and the fingerprint sensor wouldn't work. Per-display scaling seemed finicky in some desktop environments, etc.
Might try it again soon with Pop!_OS or something of that sort, but I need this to be a stable machine for work, so dunno :|
And the real trouble is that you can't just go find the one that works. If your co-workers all use webex then you're stuck.
My workstation that I use almost exclusively is Mint. I am one of those people who are, to put it politely, not keen on Google's ad and data sponge tentacles in every last item on the planet, and so I use Firefox as my primary browser.
I've yet to see many problems with the O365 suite. The only problem I do have is a complete and utter lack of desktop notifications. I'm late to meetings a lot. For someone who values punctuality, it's a major flaw in my view.
We also use Teams a lot, and there the Firefox + Linux story is significantly worse. You lose all teleconference capabilities - and that's true for Chrome or Chromium as well. The chat functionality is acceptable and weirdly, the notifications on Teams work without any problems. Someone on that team needs to show the O365 team how to do it.
I have a VM for when I absolutely have to do something in Windows. For everything else, there's ~~mastercard~~ linux.
Linux on desktop is a bad joke.
Also, my Ubuntu Mate has been working flawlessly on the desktop for 5 years, 10+ if you count the gnome2 days. No privacy invasion or helpful ads either.