> Attempting to 'unixify' Windows is indeed a task often fraught with much pain
I chose TDM [1] to port a utility of mine to Windows (the initial port may actually have been done with MSYS/Cygwin). TDM is MinGW-based and doesn't try to achieve full POSIX compliance; it provides Win API header files instead.
I did the adaptations for my socket-related stuff a long time ago. A bit tedious, but worth it. I don't depend on a DLL that's bigger than my program, and I have fine control over what it does.
I discovered TDM thanks to Newlisp, a Lisp dialect interpreter which has quite a few batteries included [2] (some of its features are still Linux-only though).
Went through the same pain last year setting up a Windows machine for work.
I ended up giving up on native tools and msys2 entirely in favor of WSL2 after several months.
And then finally asking for a Mac after several more.
There are way too many bugs and quirks in every layer of this stuff, from missing features in the 'native' OpenSSH port by Microsoft to data loss bugs (!) in WSL2.
I've tried for ~1 year to like and use WSL2 on a daily basis (on two different systems, no less) because I already use Windows for other stuff, so I might as well try it.
But they make it really hard to rely on. Between random shutdowns, GPU unreliability, data corruption, and worse performance than just running Linux directly, I had to move away from it and back to Linux, because I felt like I was slowly losing my mind troubleshooting these issues.
The latest issue was that the virtual disk file had grown beyond 500GB while Linux reported, from the inside, that it was only 20% full. So, after reading around, I gave optimize-vhd a try and boom, the whole disk file was corrupted.
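For anyone tempted to try the same compaction, it's roughly the following (run from an elevated PowerShell with the Hyper-V module; the VHDX path is a placeholder, the real file lives under your distro's package folder). Given the outcome described above, copy the file somewhere safe first:

```powershell
# Make sure no WSL instance has the virtual disk open.
wsl --shutdown
# Compact the disk. The path below is a placeholder; the real file is usually
# at %LOCALAPPDATA%\Packages\<distro package>\LocalState\ext4.vhdx
Optimize-VHD -Path "C:\path\to\ext4.vhdx" -Mode Full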
So: I gave it a try, hit a bunch of issues, and left. I'm not planning to touch it again until WSL3, and I'll be much more cautious then.
Ouch. I've been using ChromeOS more lately, and Windows less, and I'm really appreciating the Linux environment within ChromeOS. I haven't experienced any of the issues you noted in this environment, but it does run as a container within a VM. As a result, the performance is not as good as bare metal, but it works rather well.
Now part of the issue is probably that WSL would sometimes hang and zombify, and could only be restarted by restarting the computer (in fact, it typically did this any time the laptop woke from suspend). So I didn't always get to cleanly shut down WSL. Additionally, I didn't have a choice of filesystem: all my WSL guests ran ext4.
The issue seemed to be that sometimes files which the guest believed were written to disk were not actually written to the VHD file backing it, or the VHD file's changes were not actually written back to the physical disk. So sometimes upon a reboot (or after `wsl --shutdown`, which doesn't attempt safe shutdown IIRC), I'd lose files.
Generally it was only system files, and it was recoverable by mounting the guest's VHD in a new guest and repairing the OS in the worst cases, but it sometimes did render the guest unbootable.
When I Googled the issue, I found people reporting similar problems on GitHub and Reddit going back years. On bare metal, the same distro I was running features atomic upgrades (on all filesystems), has survived power outages during system upgrades, etc.
I work in an industry where there's critical Windows-only software. Since WSL arrived, things are so much less painful: we can do everything except the specialist software under WSL (including git, SSH, etc.) and access the files seamlessly from both sides. I guess there is some unfortunate reason why they can't do the same here; the article doesn't give specifics.
> I guess there is some unfortunate reason why they can't do the same here
Because they want to support Git for Windows, which is built upon Cygwin/msys. And this is fine, because Cygwin is much faster than WSL and good Windows integration is so much easier. It would be a total PITA if Git were only available through WSL.
Cygwin is still the best thing that ever happened to Windows (or to Linux users who are forced to work with Windows).
Ironically, I still prefer using Cygwin to WSL despite the impedance mismatch with Windows. I really wish that Microsoft had been able to make WSL1 work better instead of giving up and going with the VM route with WSL2.
Manager-prematurely-distrusting-developers syndrome: WSL1 had serious mmap issues that were triaged as "low-prio" and weren't fixed until late in the cycle (probably long after the decision to go with WSL2 was taken). Once those were fixed, a ton of "weird" bugs magically got fixed.
Sure, WSL1 could never have handled Docker as-is, but general software would've been mostly fine, and it would have given us damn good Windows application integration. (And I'm sure some Docker shims would've appeared fairly quickly to satisfy most basic developer needs.)
Most likely - just like shims popped up to share PuTTY's ssh-agent with WSL1.
These days I'm just running a full-blown Linux DE in a VirtualBox VM, since its graphical console isn't objectively terrible like Hyper-V's. If for some reason I need to copy/sync files with the Windows host, well, that's just an rsync away thanks to Cygwin.
Is it? I'm pretty sure WSL is much faster if your workflow allows putting files on a Linux filesystem, but I also recall that perf in shared Windows folders felt about the same (bad, but not worse) when I switched from msys git to WSL. I haven't used Windows in years, though, so my memory could be off.
> Because they want to support Git for Windows which is built upon Cygwin/msys
I read that, but I was wondering why they wanted that because they don't say. I've never noticed git to be slow under WSL but I guess that very much depends on your repo size and exactly what you are doing. It all sounds like a tremendous amount of hassle otherwise.
I agree, Cygwin did enable us to work productively for many years. But Cygwin's drive mounting modes caused lots of pain. And the Xserver is simply awful. It is/was a constant source of issues, the worst being the incredibly frustrating crashes. Maybe they were due to underlying issues in the applications themselves but those apps work fine under WSLg (of course there are other bugs but we can still work with those).
Git is very slow under WSL2 when working with our medium-sized repos hosted on Windows filesystems (required by our Windows-only tooling). It's not unusable, but 'git status' under WSL1 was almost instant, whereas it takes a good few seconds under WSL2.
It's due to what is effectively a network file system. You have the same issue with NFS, 9pfs, Parallels prl_fs, etc. Git has to stat every file in the repo, and that gets expensive over any RPC protocol. Even local ones, apparently.
This is made worse because git uses OS-specific file statistics, including the inode number, to track file changes. These change across most such filesystem types, which triggers a rebuild of the entire file cache every time you switch which OS runs the git commands. Which, if you have shell integration, is constantly.
I share my git repos between macOS and a Linux volume and it hurts on any repo with hundreds of thousands of files or more: Ceph, the Linux kernel, etc.
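To make the cache-invalidation point concrete, here's a small sketch (not git's actual code) of the kind of per-file stat data git's index records; if any field differs on the next stat, git re-hashes the file:

```shell
# Capture the stat fields git's index caches for one file: mtime, size,
# inode, device (git also records ctime, uid, gid). Uses GNU stat (Linux/WSL).
tmp=$(mktemp)
printf 'hello' > "$tmp"
fields=$(stat -c '%Y %s %i %d' "$tmp")
rm -f "$tmp"
# On a network filesystem the inode is often synthesized per mount, so the
# same file can yield different fields on each OS, invalidating the index.
echo "$fields"
```

After such an invalidation, `git status` has to re-stat and re-hash everything, which is why it can take minutes on a large repo over 9p.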
Turns out you guys are right. I tried a bigger repo and it was slow. I tried our large monorepo and it was utterly unusable (literally minutes for git status).
I have Git for Windows installed and then alias git=git.exe, so git from WSL just calls Windows git, which is much faster, and I've never noticed any issues with this.
I also use WSL1 though so maybe my setup is weird.
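A slightly more defensive variant of that alias (a sketch: /usr/bin/git and the /mnt prefix are the usual WSL defaults, so verify them locally) dispatches to the Windows build only for repos on Windows drives:

```shell
# Pick a git binary based on the working directory: the Windows build for
# /mnt/* (Windows drives, where it avoids RPC stat overhead), the Linux
# build everywhere else.
pick_git() {
  case "$1" in
    /mnt/*) echo "git.exe" ;;
    *)      echo "/usr/bin/git" ;;
  esac
}

# Wrapper for ~/.bashrc: forward all arguments to the chosen binary.
git() {
  "$(pick_git "$PWD")" "$@"
}
```

This keeps Linux-side repos on the native git, avoiding any line-ending or path-translation surprises from the Windows build.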
It's been about 4 years since I tried WSL (really hoping to replace macOS as a unix workstation with a robust commercial software market) but my file access and socketed-app experience then was definitely anything but seamless.
Presumably, if it was only 4 years ago, that would already have been WSL2, but they have made a lot of improvements over the years. The audio and graphics WSLg stuff is pretty cool. Maybe worth giving it another try.
Not sure about your requirements, but I'm pretty happy that nowadays I can just mount sshfs within WSL and access it from Windows Explorer without any special hacks.
These are the notes I have for setting this up (in 2019). I am not an Administrator on this PC, it's joined to a corporate domain.
1. In the start menu search for "Services"
2. Double click on "OpenSSH Authentication Agent"
3. Set the startup type to "Automatic"
4. Click "Start" and then OK.
And after that one-time fix it's perfectly usable. It is possible I also had to do something else and forgot to document it.
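For reference, the same four steps can be done from an elevated PowerShell (the Windows service name behind "OpenSSH Authentication Agent" is ssh-agent); this needs rights to modify services, so it may not work on a locked-down domain machine:

```powershell
Set-Service -Name ssh-agent -StartupType Automatic
Start-Service -Name ssh-agent
```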
The PuTTY agent can service both PuTTY clients and Microsoft OpenSSH clients, and it can do this without a service, nor does it need the implied administrative privileges to launch said service. That is a hands-down win.
The PuTTY clients can also accept raw passwords in several ways: by interactive prompt, by the -pw option on the command line (beware of exposure in Task Manager), and by the -pwfile option.
The latter can be adapted to the .netrc format (used by Microsoft's curl.exe and ftp.exe), which vastly expands authentication options.
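For reference, a .netrc entry looks like this (hostname and credentials are placeholders; keep the file readable only by you):

```
machine git.example.com
login alice
password hunter2
```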
So the question is why anyone bothers with Microsoft's ssh-agent.exe - it's quite limited.
I'm much less annoyed about "see[ing] UNIX everywhere" than I am about "there's a lot of specialised hardware design software that's only supported under Windows, so this isn't really avoidable".
It was hell, but perhaps out of that hell was fashioned a sort of paradise.
If the framers of the Internet had not been faced with disparate systems all speaking different languages and protocols, if they had not been introduced to computing as a babel of vendors all wanting to do their own thing, then perhaps they would not have longed for vendor-neutral, interoperable protocols, such as we have today.
The drive in the 70s to hook up very different computing platforms and have them all talking to one another, that's what gave us the Internet today. Perhaps the choices in OS have diminished since the demise of proprietary systems and bespoke OS for so many platforms, but from where I sit, I still see a wide diversity of devices all connecting to the same Internet, all speaking the same protocol, and I'm thankful for that development.
That desire to "do their own thing" has moved on to infect the application layer, and today I have to keep multiple different video chat clients installed to make sure I can meet with anyone.
Unix stuff has only recently been "in the norm" for dev related workflows, it used to be entirely Windows only.
Same with office workflows, games (both playing and developing), graphics, etc.
Unix is the rebel here. It is bringing the diversity.
(Currently my workplace is probably 80% Mac, 19% Linux, 1% Windows, but I can see their point that it would be a lot easier to administrate if they were a monoculture of Mac).
Yea, let's not forget the dark times of the late 1990s / early 2000s. It was next to impossible to motivate developers to release software for more than Windows. Windows application software developers were thoroughly steeped in the platform. WinMain(), HINSTANCE, LPTSTR, CopyMemory(). Even for parts of code that did not require interacting with Win32, they'd reach for DWORD instead of unsigned int. The effort of porting these monstrosities to something non-Windows was enormous. As a junior engineer, I'd butt heads with my senior leads, arguing to keep the business logic in portable C and keep the "windows stuff" minimal and in its own cordoned-off directory. All the senior guys thought this was ridiculous--we would never have to port this to anything but Windows. Their idea of "portable" software was: It works on Windows 95 and Windows NT.
AIX, Atari, Amiga, Acorn, and BeOS were dead; Solaris, HP-UX, and NeXT were POSIX and had no applications except for highly specialised professional use cases; most of the rest were embedded platforms doing their own thing. None of them had any kind of meaningful market share. It was hell for Mac users, because only the loyal developers from the previous decade remained. Windows was on something like 90% of home computers. I remember it very clearly: it was a Windows monoculture and it was fucking awful if you were on another platform.
Your assessment that AIX is dead, when it is one of the survivors of the UNIX/Linux wars, already shows how your Microsoft hate blinds your judgement.
Do you know of a decent summary of the current state of AIX? I was aware that new systems were still being sold and deployed but had that filed more under "not dead yet" than "survivor" as such.
I miss the 90's. There was much more diversity: VAX/VMS systems were quite common on the Internet. On the consumer side, I knew plenty of people with Amigas and Ataris at home. By the late 90's, that was all gone.
I once built p0f on Windows using msys2, but couldn't get the sockets to work. Without Linux socket support, p0f cannot run as a server and answer queries.
I think I've noticed this conflict between msys and the built-in OpenSSH and worked around it for years without understanding why it happens.
Basically I find that my ssh agent never works unless I set GIT_SSH to the builtin one (at C:\windows\system32\openssh\ssh.exe). Once I do that, everything is happy. If I don't, git over SSH doesn't work right.
And that makes sense. When msysgit calls the Microsoft one explicitly, it uses the Microsoft version of the agent protocol.
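The workaround described above is a one-time setting; sketched here for cmd.exe (the path is the stock Windows OpenSSH location, so verify it on your system):

```
setx GIT_SSH C:\Windows\System32\OpenSSH\ssh.exe
```

With GIT_SSH pointing at the Microsoft client, git's SSH traffic speaks the Microsoft agent protocol end to end instead of mixing in msys's.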
You can use socat with either plink or npiperelay to bridge the sockets between Windows and Cygwin. It's my standard setup for using the built-in ssh-agent of Windows 10.
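The npiperelay variant of that setup is roughly the following (a sketch: it assumes npiperelay.exe is on the Windows PATH and that the Microsoft agent uses its stock pipe name; adjust both locally):

```shell
# Expose the Windows ssh-agent named pipe as a UNIX socket that Linux/Cygwin
# ssh clients can reach via SSH_AUTH_SOCK.
export SSH_AUTH_SOCK="$HOME/.ssh/agent.sock"
rm -f "$SSH_AUTH_SOCK"
(setsid socat \
  UNIX-LISTEN:"$SSH_AUTH_SOCK",fork \
  EXEC:"npiperelay.exe -ei -s //./pipe/openssh-ssh-agent",nofork \
  >/dev/null 2>&1 &)
```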
>so I've been working on extending our support for hardware-backed SSH certificates to Windows
Interesting work & I wish him luck. The ability to use hardware SSH certs on Windows has been around for at least a decade now, but it hasn't been a seamless experience.
The other attempt I'm aware of is PuTTY-CAC[0]. It is recommended[1] for security-conscious organizations, and approved for use at places like the US Department of Veteran Affairs[2]. Vendors[3] of smart cards also show some limited support for it.
The issue with PuTTY-CAC is that the server still needs to be configured to check the certificate against CRLs & PKI infrastructure. Over the past decade, I believe Red Hat has made some significant progress towards a workable solution[4], however, I have no experience with it.
I do the same, and I too wish that we lived in a world without Microsoft: I'd take only Linux and Mac machines any day over having to live in a world where that turd that is Windows exists. But sadly that's not the world we live in: in the world we live in, MSFT has a freaking 2.4 trillion USD market cap, so work like in TFA is needed.
At least Apple and Linux did make a big dent and the billions of Linux devices shall keep powering the real world for a very long time and it seems unlikely Mac users are going to trade their MacOS laptops for Windows turds, so all hope is not lost.
And thankfully, so far, Microsoft phones have all been miserable failures.
I do my part to fight Microsoft: I confiscated my mother-in-law's Windows laptop after she got malwared and fell for a scam, and I bought her a Chromebook. My wife is more of a power user, so I wiped her Windows system and installed Ubuntu for her. I made my brothers switch to Mac computers a very long time ago: I was tired of supporting their Windows machines, which never failed to get slow, catch malware, etc., and simply told them: "You buy Mac computers or I'm not helping you anymore".
Microsoft is a mediocre company making the world a shittier place, but they're here to stay.
"Slow": I removed Windows on a freshly installed laptop less than 6 months ago. Its latency was noticeable in spite of running a 12th Gen Intel(R) Core(TM) i7-12850HX. This is not a case of Windows "becoming slow", but of Windows "being slow". You could argue that my point of comparison is unreasonable, because I disable so many things in my operating system that I don't use that it's no wonder. Before starting my browser, I use less than 200MB RAM. My UI response time is lower than what my eye can perceive. I don't know exactly how fast that is, but I think gamers begin to care at around 10 ms.
Malware:
> Microsoft is Slowly Rolling Out Ads in the Windows 11 Start Menu
In a world without Windows, malware would exist on whatever else the dominant OS is, and you'd still have to replace your MIL's laptop with a more obscure system.
In a world without Windows (or at least one with enough competition), software and hardware wouldn't require Windows either, since they would support multiple OSes.
And if we lived in a world where Apple was in Microsoft's place, it would be a darker world. I'm one of the original 3rd party mac developers, and despite liking their hardware and appreciating their OSes, the company that is Apple is one horrible fascist asshole. I've never published Apple software due to their Orwellian legal practices, they are flat out anti-independent developer in exponential layered ways.
> Astonishing that after more than 30 years of coexistence of Linux and Windows OSes such arrogant remarks are still being made
Why are you not astonished that, after "30 years of coexistence" people still: feel the need to buy Windows and slap Linux or BSD on it (as opposed to buying one with Linux or BSD out of the box) and are forced to buy Windows to use software and/or hardware required to do work otherwise unrelated to Windows?
I don't mind paying for tools I'm using to get the job done. But ignorant remarks in the tone of 'drop the crap you're using, no matter what it is, and install Linux' were maybe good 30 years ago, now it's just kinda disrespectful. People have various reasons to use their tools and it's not because they don't know Linux exists.
When you book a vacation and the hotel comes with a buffet, you know you're not getting the best restaurant in town, but you might opt for it because it's easier: You don't have to make a bunch of conscious choices and incur risk, you let someone else make that choice for you. When the sausage tray is always empty, you blame yourself for being cheap.
Windows is the buffet that comes with your new PC.
You didn't order ads, telemetry and questionable UX?
After Microsoft changed its strategy from FUD smear campaigns to EEE corporate assimilation, Linux interoperability has become significantly better. My colleagues all have Ubuntu terminals on their WSLs, and it all seems to work pretty great.
I don't know why Microsoft technology triggers me so much.
I have been a big fan of C#, VSCode, LSP, GitHub Actions.
I'm sure it's an instinctive reaction to their aesthetic.
I'd be surprised, real work like this is never understood by the bean counters. He'll probably get frowned upon because it took longer than the half day some pointy haired idiot scheduled the work.
Cannot get to the article; got a really lame captcha.
But from the comments, is this to fix WSL ? If so, people should stay away from that and just use Linux. If the masses start using WSL, that would give hardware vendors the reason to ignore Linux on bare metal, thus no more desktop Linux.
WSL is basically an attempt from Microsoft to avoid the encroachment of alternative Operating Systems in big corporations starting from the software development teams.
As more and more corporations use open source development tools instead of being tied to the Microsoft ecosystem, Microsoft decided it needed to make the experience for those developers less miserable, so those corporations would stay firmly entrenched in enterprise Windows licensing contracts.
Also, Azure didn't stand a chance if it was a Visual-Studio-only deployment destination, so this is yet another reason to provide better compatibility for open source development tools on the Windows desktop.
Git users with complex login needs should use OIDC. In order to do "enterprise" hardware SSH credentials, you already have the rest of the "enterprise" "OIDC" (not really OIDC, but part of the sales meaning of it) infrastructure. I believe you can wire it up that way, and not deal with SSH agents at all.
If that project https://github.com/buptczq/WinCryptSSHAgent had a PIN timeout, it would be the perfect Windows ssh agent. It supports named pipes, Pageant shared memory, and a UNIX socket under WSL2 using Hyper-V and socat.
The root of the problem is attempting to avoid porting git to Windows properly, which leads to increasing contortions once you get past the easy stuff.
So for those that only skimmed, the problem here is bridging the gap between native Windows and Cygwin's POSIX emulation (which is a "terrible impedance mismatch"). This is made more difficult because Cygwin's Unix socket emulation is a "kind of awful"[0] undocumented protocol.
[1] https://jmeubank.github.io/tdm-gcc/ [2] http://www.newlisp.org/