WSL 2 runs on a subset of Hyper-V, which is a hypervisor, so basically yes.
However, there are some interesting things going on in WSL 2 versus a "normal" Hyper-V virtual machine. For example, a Linux distro running on WSL can (and will) use GPU partitioning (aka PCI/GPU passthrough) and a special implementation of DirectX, enabling the installed video card to accelerate graphics within X and/or Wayland.
Although this feature can be enabled with a lot of hacking in both the Linux guest and vanilla Hyper-V on the host (the latter through PowerShell), it is officially unsupported on Windows 10 and Windows 11, and is only supported on Windows Server.
Oh, I thought GPU passthrough was enabled on vanilla Windows 11, but I didn't delve into that feature enough. It's still extremely impressive of course. Perhaps I should write another article about graphical features.
No, PCI-E passthrough is not enabled on non-Server, and you arguably wouldn't do this on a Windows hypervisor anyway; you'd do it with a Linux+KVM hypervisor for either Linux or Windows guests.
Using GPU paravirtualization, however, is allowed. WSL2 does this by using the existing Mesa/DRI/DRM open-source stack, but instead of a GPU-specific DRM driver, it is one that speaks WDDM (the Windows driver stack's equivalent of DRM), and only requires a GPU-specific enablement package (provided by the vendor, matching the Windows driver it talks to; AMD, Nvidia, and Intel all ship one inside of WSL2).
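(If you want to poke at this from inside a WSL2 distro, a minimal check looks something like the sketch below; treat it as illustrative rather than definitive.)

```sh
# Inside a WSL2 distro. /dev/dxg is the guest-side device the Mesa d3d12
# driver uses to talk to the Windows WDDM driver on the host.
ls -l /dev/dxg

# The vendor-provided userspace bits are mounted in from the Windows
# driver store rather than installed via the distro's package manager.
ls /usr/lib/wsl/lib/ /usr/lib/wsl/drivers/
```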
> No, PCI-E passthrough is not enabled on non-Server, and you arguably wouldn't do this on a Windows hypervisor anyway; you'd do it with a Linux+KVM hypervisor for either Linux or Windows guests.
If it was enabled on Pro, I would use PCI passthrough. I use Hyper-V for a Linux dev environment on a Windows workstation. My NIC supports virtual functions, so if I could pass one through to the dev VM, I wouldn't need software bridging, and that might be nice. (OTOH, I don't know if my motherboard has reasonable passthrough groups and all the other stuff that makes passthrough never work for me.)
I don't want two computers for work, and I'm not getting paid enough to fight with a Linux GUI. I can do all the work without a Linux GUI and have a working desktop (still W10 for now). I just need a VM that's a close enough match to the prod VM.
I think part of the answer is that if you're going to use both, it's nicer to use Windows with Linux as the guest than the reverse. MS clearly put a lot of effort into making the integration nice, and it shows. Like how Parallels on macOS makes Windows a very nice guest.
If there was software that made Windows as seamless on Linux I bet it would get a lot of use.
It relies upon SR-IOV, and only a handful of server-specific Nvidia GPUs are listed as supported.
Intel's Flex dGPUs and Arc iGPUs have supported SR-IOV for years now, but they aren't listed there. It would be super awesome if Microsoft could add it for Intel Arc iGPUs, desktop versions of Windows, and WSL2! Intel's GPU SR-IOV already works with KVM on Linux!
> For example, a Linux distro running on WSL can (and will) use GPU partitioning (aka PCI/GPU passthrough) and a special implementation of DirectX
That is still just a normal VM, but it's nice that it's automated.
> enabling the installed video card to accelerate graphics within X and/or Wayland.
nit: X and/or Wayland is not involved in application rendering at all - it's the applications themselves that use the GPU and its acceleration directly.
Wayland and/or X is only involved once the apps are all done rendering[0], and the display server's own rendering is the comparatively simple task of stitching windows together[1], and sometimes not even that.
0: You can send buffers early over Wayland if you also send a sync fence, but this is just forwarded as a render dependency that the GPU scheduler will patiently wait for.
1: well, also dealing with stuff like color transforms, which can be complex to understand but are computationally cheap, and for fullscreen content possibly entirely free.
I picked this up by contributing to display servers and following kernel KMS/DRM stuff, but #wayland on OFTC or #sway on Libera.Chat are both very helpful.
Also https://wayland.app to see the current Wayland protocols (the Wayland core protocol is mainly some common primitives, most stuff is across the other protocols). For example, the sync object stuff is in https://wayland.app/protocols/linux-drm-syncobj-v1 (in many cases handled by your toolkit or WSI of choice).
Don't WSL2 and the WinNT kernel both run on top of Hyper-V, on (very approximately) equal footing? The NT kernel, of course, has all the hardware access, not necessarily granted to other VMs, such as WSL2.
There are a variety of applications that it's not really possible to run natively on Linux, for example Widevine L1. I don't think there are any applications that can't run through WSL.
Thus far I have found the native WSL2 graphics integration to be pretty disappointing in comparison to what I used to get with an X-server setup. Unfortunately the legacy X implementation doesn't have the modern API, and application developers' interest in it is tapering off. Hopefully the groundswell of WSL2 support will improve this in time …
As an end user, the out of the box experience of getting a blurry window with the wrong content size isn't very useful. And I've never tried to actually configure it properly.
I wonder if it's a decent experience even when configured properly. In that case, why do all the IDEs come with remote development / WSL integrations that involve running the client on the Windows side?
Agreed. No way to really test WebGPU, for instance. You can't really test GPU drivers under Linux or using the native Windows browsers. Lots of incomplete attempts to make this work, none of which are reliable or easy to use.
While I understand _why_ they did WSL2, it's pretty sad that at the same time they just dropped any WSL1 development.
We're using a lot of WSL in CI - we're mostly Linux based, but for some stuff, toolchains came up which didn't work nicely with Wine (like MSVC). So we want a Linux system that can seamlessly execute Windows stuff in a Linux-based build process. WSL1 can do that; WSL2 can be kicked into working somewhat, but needs quite a few ugly workarounds, as the two sides don't share a process namespace or file descriptors. The faster I/O would be nice, but that's pretty much the only thing we'd care about - and it wouldn't work here, as we need shared access to the files. And while we could access the WSL2 files from the Windows side, that's even slower than just using WSL1.
As WSL1 runs on the same kernel, I have stuff like named pipes available. I can fully mix processes from both worlds in one workflow. _Some_ of that works with WSL2 as well, but as it's processes on two different kernels, it requires hidden workarounds to make that work - and not everything is working (and some of it may not even be possible).
Why would you want to mix Windows and Linux processes into one workflow on the same host?
I’m sure you’ve encountered a very niche problem that requires it, but I cannot think of any scenario where that kind of behaviour would be desirable vs splitting those workflows up.
Are you not able to have separate Windows and Linux hosts (eg VMs or containers) that are instigated to run in parallel as part of the same pipeline, but don’t rely on the same processes running in the same host?
Or at the very worst, use a TCP/IP based RPC to share state between the different hosts?
We're using OBS for building stuff (and heavily abuse it for the Linux side already) - a few things (like reverse-dependency builds) make it more useful than most of the other stuff out there for complex projects. So when the requirement for some Windows builds came along (which are tiny compared to all the other stuff we're doing), we just ended up using WSL to have Windows workers in OBS. It also has some other advantages with how our CMake builds work (short version: developers can do their own bit in Visual Studio, and then a bit more checking runs on CI, where we can reuse the usual stuff without caring about Windows).
The openSUSE Build Service. It's pretty good at figuring out if something needs rebuilding, so in some cases violating it and making it do stuff it's not supposed to is a sensible choice.
That'd mean we have a different compiler in CI than what the developers use on their workstations - which is not a good thing. And wouldn't be tolerated there anyway - people already had some doubts when we were wrapping IAR with wine and a bit of shell to look like a standard UNIX toolchain.
(Shout-out to IAR, though. I still think they're overpriced and you don't really need them - but if you're stuck with them their support is excellent. When an update broke license handling in our setup they didn't tell us to go away like any other vendor would've done for that kind of wildly unsupported use case, but actually made it work again in the next release. They also got us access to native Linux binaries way before they even were talking in public about working on that to play around with for our CI)
Yes, a VM with extremely tight integration with the Windows environment that makes things that would otherwise require lots of time to set up a breeze. I use it as my daily driver for dev work (at work, since we're required to use Windows :( ) and to be honest it's quite pleasant most of the time.
I usually work in a VM hosted by my company. But the performance is really starting to irritate me. Been considering switching to WSL2, but last time I checked all they supported was Debian based distros, and we do all our work on RHEL8. I don't think it would matter much but it's still annoying working on an entirely different setup from the rest of the team.
How is it "tight" when even `ps` or `top` show VM processes instead of OS ones? Could you give an example of functionality that can't be done with `docker run -it ubuntu`?
I used it for a little bit (got Windows laptop, thought maybe I'll switch but no), and just hated that split brain workspace.
WSL2 is a bit like Firecracker for Linux. It's a lightweight VM, with a lot of optimizations here and there. For faster startup, lower memory footprint and so on.
PS: which means you need a lot of memory, if you use WSL2 extensively (multiple Docker containers for example). 8-16 GB on top of your usual workloads is a good starting point. Docker on WSL2 is not a lot of fun with less than 20 GB system memory.
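(For what it's worth, you can cap how much the WSL2 VM is allowed to take via a `.wslconfig` file in your Windows user profile; the values below are just illustrative starting points, not recommendations.)

```ini
# %UserProfile%\.wslconfig - settings apply to the whole WSL2 utility VM
[wsl2]
# cap the VM's RAM instead of letting it take most of the host memory
memory=8GB
# number of virtual CPUs exposed to the VM
processors=4
# size of the swap file backing the VM
swap=8GB
```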
> For faster startup, lower memory footprint and so on
Any idea how they do this? My WSL2 starts insanely fast, like about 1-2 seconds. I've never seen a Linux distro natively boot that fast. I'm assuming they suppress any sort of BIOS startup screen for starters, but what else?
It’s a very trimmed-down kernel with a minimal set of drivers and modules. It doesn’t even support SD cards out of the box. That’s probably the main reason, as no hardware probing/initialization delays are incurred.
At a high level, WSL2 provides a single optimized VM and a Microsoft-compiled Linux kernel. Optimized here means that the VM only presents a small set of devices to the Linux kernel, and the kernel is built for exactly that known hardware, which is much smaller and simpler compared to a full-blown kernel (which detects a large variety of hardware) and fuller-featured VMs (c.f. qemu emulated devices: https://kashyapc.fedorapeople.org/virt/qemu/qemu-list-of-emu...).
And when you run multiple "distributions" or instances, they all share the same running VM and kernel. So after a one-time startup of the VM+kernel, opening more distributions/instances is like starting new system containers (similar to lxc/lxd or systemd-nspawn, which are also very quick to spawn on Linux) rather than new VMs. The architecture is quite similar to Linux-on-ChromeOS (Crostini).
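You can see the shared kernel directly. Assuming two distros are installed (the names below are examples), from a Windows shell:

```sh
# List installed distros and their running state.
wsl --list --verbose

# Both report the same Microsoft-built kernel, because each "distro"
# is effectively a container inside the one shared utility VM.
wsl -d Ubuntu -- uname -r
wsl -d Debian -- uname -r
```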
Actually I don't know either. But WSL2 doesn't start a lot of processes the way a desktop/server Linux distribution does. It starts up only a minimal set of processes and a shell.
dmesg looks like the kernel is booting up normally, so I guess they don't use some memory snapshots magic.
I guess they just tweaked the kernel and the hypervisor to start up fast. There is also no BIOS/UEFI delay.
Most of them still offer laptops with memory slots, so at least for individuals it's quite easy to upgrade (companies often don't upgrade; labor is more expensive than BTO). The thinner devices often have onboard memory, but even there it's not Apple-bad.
For example, the Lenovo US store: the ThinkPad X1 Carbon has an i5+16GB as the base, and i7+32GB for +$289. The P15s (2x SODIMM) comes with 16GB in the base config; 32/64GB is +$199/$399. Completely reasonable.
Thankfully you can upgrade it with more memory after buying for most laptop models. I recently needed a memory upgrade for my work laptop, and all the company had to do was order the memory module.
The Toshiba (!?) laptops that got sold to one of my clients at nearly $1100/ea are pathetic. Basically the same kit you can get for $600-700 at Best Buy. The equivalent MacBook would have been miles better, in terms of build quality and specs. Although, I'm using a ThinkPad E16v2 right now that I am quite pleased with; it was $899 with 32GB/1TB. There are reports Apple is getting ready to launch a less expensive MacBook soon. I think the <$1000 laptop options are about to get rather interesting.
I’m still disappointed to see the days of easy RAM/storage upgrades have largely gone. I was initially suspicious of the reasons offered for why soldering memory directly to the motherboard is necessary, but with a few years of academic engineering experience (I don’t use much of my EE education in my day job), it’s not illogical—as frequency increases, the circuit’s sensitivity to parasitic inductance and capacitance also increases, and connectors/interfaces are a big source of parasitic effects and general nonlinearity. That said, my desktop has traditional DIMM slots, and it’s technically running faster DDR5 than any of my other devices with soldered DDR4 modules.
I charitably assume the difference is laptops' need for greater efficiency, but either way it would be nice if the manufacturers hadn't instantly taken advantage of this new “necessity” to jack up prices on memory and storage quite so aggressively. It's probably also worth noting that Apple was first to really do this widely, as far as I'm aware, with the M1 chipset. Also worth noting the M1 was groundbreaking, and its seemingly magical memory management made it so an Apple M device with ~60% of the memory of its x86 equivalent can perform just as well (if not better). I have an 8GB M1 Mac Mini that's still quite functional for routine work. 16GB still provides great performance.
But then, my AMD desktop that cost ~$1200 a year ago (with a capable GPU) is sporting 48GB because I could pay a reasonable price for DIMMs that I can plug-in myself. Similar specs on a mass-market machine probably would have run that price up to near $2000.
I find WSL 1 incredibly useful. C++ and .NET compiler toolchains, ssh and scp clients, and many other command-line Linux tools are working flawlessly for me despite the fake emulated kernel lacking some of the APIs. When I develop anything related to Linux be it embedded or servers, I use WSL1 a lot.
I find WSL2 pretty much useless. When I want Linux inside a VM I use VMware, which is just better. VMware has a tree of snapshots to roll back disk state, hardware-accelerated 3D graphics (limited though, I think only GL is there, no Vulkan, but it's better than nothing), can attach complete USB devices to the guest OS, can set up proper virtual networks with multiple VMs, and the GUI to do all that is decent; no command line required.
LBW is a Linux system call translator for Windows. It allows you to run unmodified Linux applications on top of Windows.
It is not virtualisation; only one operating system is running, which is Windows. It is not emulation; Linux applications run directly on the processor, resulting in (theoretically) full native performance.
LINE Is Not an Emulator. LINE executes unmodified Linux applications on Windows 98/2000 by intercepting Linux system calls. The Linux applications themselves are not emulated. They run directly on the CPU like all other Windows applications.
Call me crazy, but I've wanted to run a Linux binary natively under Windows for a while now; kinda like Wine, but in reverse.
Well, the other day I was browsing through the MSDN docs (as you do) and discovered that it is possible to install a "vectored" exception handler. A quick bit of test code later, and I discovered that I can trap "int 0x80" instructions using this technique--those are used by Linux binaries to initiate syscalls.
WSL1 felt like a useful compatibility layer for running some Linux applications in Windows. It had plenty of warts, but it quickly became my preferred command shell for Windows.
WSL2 is more capable, but it's not Windows anymore. I might as well run a proper Linux VM or dual boot. Better yet, I'd rather run a Windows VM in a bare metal Linux OS. Why even bother with WSL2? What's the value add?
GPU access. Actual graphics use is so-so, but it's essential for doing CUDA/AI stuff.
Faster file system access on the Linux side (for Linux compiles etc.). Ironically, accessing the Windows filesystem is slower than under WSL1.
Better Linux compatibility.
Vs a Linux VM:
GPU access!
Easier testing for localhost stuff: Linux ports get auto-forwarded to Windows (if your test HTTP server is running in WSL2 on port 8080, you can browse to http://localhost:8080 in your Windows browser).
Easy Windows filesystem interaction. Windows local drives show up in /mnt automatically.
Mix Windows commands with Linux commands. I use this for example to pipe strings.exe, which is UTF-16 aware, into Linux text utils (see the sketch after this list).
I think WSL2 tends to be better at sharing memory (releasing unused memory) with the rest of the system than a dedicated VM.
You can mimic some of this stuff to a degree with a VM, but the built-in convenience factor can't be overlooked, and if you are doing CUDA stuff there isn't a good alternative that I am aware of. You could do PCI passthrough using datacenter-class GPUs and Windows Server, but $$$.
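To make the port-forwarding and command-mixing points above concrete, here's a rough sketch from a WSL2 shell (directory and file names are made up, and Sysinternals strings.exe is assumed to be on the Windows PATH):

```sh
# Windows local drives are mounted under /mnt automatically.
cd /mnt/c/Temp

# Windows and Linux tools in one pipeline: strings.exe understands
# UTF-16 binaries, while grep runs on the Linux side.
strings.exe some_app.exe | grep -i "version"

# Linux ports are auto-forwarded, so this is reachable from a Windows
# browser at http://localhost:8080.
python3 -m http.server 8080
```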
> I might as well run a proper Linux VM or dual boot.
Obviously you don't do the thing you're writing about. A "proper" Linux VM would incur more work for you and would be less useful. Dual boot would remove your ability to use the computer for activities that need a Windows OS. Running a Windows VM on Linux would take you down a rabbit hole of pain and annoyances, unless your use case for Windows is extremely limited.
I concur. This was my main experience with WSL1 vs. WSL2.
If I'm running Windows, it means that the files and projects that I care about are on the Windows file system. And they need to be there, because my IDE and other GUI apps need files to be on a real file system to work optimally. (A network share to a WSL2 file system would not let the IDE watch for changes, for instance.)
WSL1 was a great way to get a UNIX-style command line, with git, bash, latex etc., for the Windows file system. WSL2 was just too slow for this purpose; commands like "git status" would take multiple seconds on a large codebase.
Now I've switched back to macOS, and the proper UNIX terminal is a great advantage.
WSL supports a kernel-based DirectX-to-Mesa bridge. It is better than any other VM implementation. However, the latest releases caused some problems with the auto-detection mechanism in Mesa. Sometimes the Linux kernel module also fails to load.
You need to ensure that the DirectX driver is actually being used, with tools like eglinfo. Most of the time, the main culprit is the LLVMpipe software driver being used due to wrong detection.
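A quick check from inside the distro (assuming the mesa-utils / mesa-utils-extra packages are installed) looks roughly like this:

```sh
# The renderer string should name the D3D12 driver and your GPU;
# "llvmpipe" means you've silently fallen back to software rendering.
glxinfo -B | grep -i "renderer"
eglinfo 2>/dev/null | grep -i "renderer"
```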
You can't run a proper VM there though, something like a normal distro with systemd, a KDE Wayland session, etc. At least from what I've figured out so far. Basically I need a normal full-featured VM, not some gimped variant, and preferably with graphics acceleration.
I need something that can run KDE Wayland session properly, including clipboard sync. VirtualBox has a broken clipboard (supposedly they plan to fix it), and it's only software rendering.
How does VMware implement OpenGL and Vulkan acceleration in the VM?
My guess would be something akin to VirGL, where they forward OpenGL/Vulkan commands to the host. I've never dug into the nitty gritty, but VMware Workstation is recognized as a GPU-accelerated app, and it exposes a virtual GPU in the VM.
On the Linux side, the vmwgfx kernel driver and DRM drivers are used. For X11 you use modesetting, and for Wayland whatever it auto-selects. I don't use KDE, but I can confirm Debian's GNOME works out of the box with the default Wayland backend (to be fair, the only other compositor I tried was Sway, which is known for being picky about the GPU used). Copy/paste works properly, as does general acceleration.
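If you want to verify what the guest is actually using, something like this works in a VMware Linux guest (exact strings vary by version):

```sh
# The virtual GPU should be bound to the vmwgfx kernel driver.
lspci -k | grep -i -A 3 vga

# The GL renderer string typically reports VMware's SVGA3D virtual GPU.
glxinfo -B | grep -i "renderer"
```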
I use those VMs daily, fullscreen on a 1440p monitor, so acceleration is not optional :)
Could one mount a real coexisting ext4 partition to reduce some of the perf penalty of having to simulate a block device on top of those ugly big image files?
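WSL can actually attach a whole physical disk to the VM and mount a partition from it, which avoids the virtual-disk image entirely (the disk has to be one Windows isn't currently using). Roughly, from an elevated Windows shell, with the drive/partition numbers below as placeholders:

```sh
# Attach the disk and mount partition 1 as ext4; it shows up under
# /mnt/wsl/ inside all running distros.
wsl --mount \\.\PHYSICALDRIVE2 --partition 1 --type ext4

# Detach it again when you're done.
wsl --unmount \\.\PHYSICALDRIVE2
```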
The article mentions dynamic memory sizing as one of the benefits of WSL2 over traditional VMs, but afaik Hyper-V supports that on normal Linux VMs too. WSLg is genuinely pretty nifty, but for command-line stuff WSL2 imho doesn't really bring that many advantages.
Some distros have better WSL support than others—some will only work with systemd disabled, others have issues with X11. Ubuntu is well supported of course, but on the RHEL-ish side I've found that AlmaLinux 10 works especially well.
I always found WSL to be a hack and not a true Linux distro: fake PID 1, not really starting or shutting down, etc. Even the Docker integration is really odd. I know it was fixed with WSL2, but WSL1 had terrible I/O performance.
We use it at work to do things on WSL2 in Azure Desktop that don't require a Linux VM, and where the Windows versions of tools like helm, kubectl, etc. feel clunky. We can easily interface with ACR and AKS this way.
Can you create multiple instances each with their own IP address like you can with Virtualbox? Networking was the reason I didn't stick with WSL2 when I tried it but that was a long time ago and it's probably improved.
Strangely, if you create multiple 'instances', they're actually all running as "containers" in the same VM with the same kernel. You can see this by running `wsl --system` and noticing that ps will show you processes from all running WSL distros.
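A quick way to see it for yourself (the distro name is an example):

```sh
# From a Windows shell: run something identifiable in one distro...
wsl -d Ubuntu -- sleep 300

# ...then, from another terminal, drop into the system distro. Its process
# table shows processes from every running distro, since they all share
# the one utility VM and kernel.
wsl --system
ps -ef | grep sleep
```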
"1.6GB on a 32GB system. [...] I think that's pretty much okay. What'a a gigabyte to spare in these days anyway? Just assume that you're running your clock app under Electron."
I guess the main issue is that Windows filesystem APIs are slow. Windows does a lot of things when opening/closing file handles (ACLs, virus scanning, and many more). Unix-style applications with a lot of small files just perform really poorly. That's also why npm install takes ages on Windows.
They made it much better with Dev Drives, which use ReFS instead of NTFS and disable most of the filters in the filesystem stack.
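If you want to see the gap for yourself, a crude comparison from inside WSL2 (the repository paths are hypothetical) is enough:

```sh
# Same repository, checked out once on the Windows mount and once on the
# Linux filesystem; many-small-file operations show the difference clearly.
cd /mnt/c/Users/me/src/myrepo && time git status
cd ~/src/myrepo && time git status
```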
I am advising a dev team that's used to Windows to use WSL for the new Next.js app we're building.
But the filesystem performance really put a huge spoke in this. I thought everything was better with WSL2, but I was surprised to see that MS hasn't engineered some driver or pass-through that would make this much more performant, so that you can have a directory on Windows but also have it perform really well in the VM.
Umm, you must be rather inexperienced with Windows and didn't do much searching for that to happen.
1. WSL2 mounts Windows drives under /mnt/. You can just cp things.
2. WSL2 distros are exposed as network shares. WSL installs a virtual "Linux" shell folder on the Desktop and in the Explorer navigation bar. It is hard to miss. Moreover, a simple search query would show you the \\wsl$ share.
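Concretely (user, distro, and file names below are made up):

```sh
# From WSL: Windows drives are ordinary directories under /mnt.
cp /mnt/c/Users/alice/Downloads/data.csv ~/project/

# From Windows (Explorer address bar, cmd, or PowerShell), the distro's
# filesystem is reachable as a network share:
#   \\wsl$\Ubuntu\home\alice\project
```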
I will start by saying I haven't used Windows as my full-time desktop in 20 years. I did use VS/Windows for 2 years while I did a C# project in 2013-2014, managed a bunch of Windows servers, used (and liked) PowerShell, everything, but that was inside a VM on my Mac. And the other two Windows full-time devs I was helping had never used Linux or WSL (and one of them was not terribly keen on the whole idea). But we were all new to WSL. I knew WSL was very easy, and even many devs at MS use it.
So to provide more detail, things were slowed down further because this was one of those teams meetings where you can't just take over, but have to tell someone to type another command and wait for them to type it. The second thing was that the user didn't tell me that they were switching to another user when escalating to admin (cuz I couldn't see that elevated system dialog in the screen share). So it turned out they had installed WSL as a different admin user, so when they went to \\WSL$ (as their original user), it wasn't showing any shares. That set off a lot of googling and claude'ing that went nowhere.
Suffice to say I was ready to end the day after that meeting :-)
A red flag early on was when a bun install took 8 mins when trying to run it on /mnt/c, when it took 200ms on my machine. So I knew there had to be some weird filesystem overhead stuff going on. So then when we got it working beautifully by just using the VM's filesystem, I was personally happy with it but the person on the other end felt this was all too cumbersome and was soured on WSL, even though I tried to explain the differences.
I kept thinking that WSL was the greatest thing since sliced bread and got the message that MS had found a way to make them work beautifully together (especially in WSL2). I'm sure I could've figured this all out on my machine in probably 10 mins.
Only if you don't use Windows tooling. If you use a native Windows git client over the network file share, trouble begins. Even VS Code does, without its Remote WSL integration.
You can put everything inside Linux, but then it's better to switch to Linux completely. It doesn't make much sense to do everything inside the WSL VM.
Node/JS development works really well on native Windows. Some things are a bit slower, but it's not horrible.
The file operations on macOS are rather slow too. I needed to invest in some rsync-based syncing for an in-Docker application build, as accessing the mounted volume from a Docker container was around 20 times slower than on Linux :O
That’s another issue. The access from macOS/Windows to the Linux filesystem (Docker volume) goes over the loopback network. The same applies the other way around, for Docker bind mounts to the Windows/macOS filesystem.
Actually, there's an interesting question - how does Wine implement (or not implement) Windows API calls which interact with filesystem features which aren't available on Linux, like alternate data streams or complex ACLs?
From a quick search, it sounds like ACL metadata is stored in a way that's specific to NTFS. Given that Wine prefixes don't tend to have their own partitions, I'm guessing that if Wine supports them, they just store the info in a config file (similar to how the registry is implemented via text files stored in the root of the prefix) and then looks up accesses from there.
Wine probably isn't something you'd want to use if you're concerned about actually enforcing full Windows security rules. Even if you were able to enforce access to files when executing within Wine itself, the wine prefix will still usually just be a bunch of 644-permissioned files sitting on a Linux filesystem that can get used like any other; the entire wine prefix is just a regular directory with regular files from the external perspective of the wider system, and nothing stops anyone from doing whatever they want that they could do to any other file with the same permissions.