ActiveX deserved abandonment, but anyone who remembers those years probably also remembers that Microsoft is capable of supporting NT on many different architectures.
Potentially capable; their support for non-x86 has always fallen short.
If you look at e.g. Linux or BSD distributions, the entire world is rebuilt for every architecture. Running Linux on powerpc, arm, amd64, I get the exact same experience across the board as x86 bar platform-specific bits like openfirmware/efi tools. Microsoft has never done this. The vast majority of their stuff remains x86 only, making arm and even x64 second class citizens, with x64 only being viable as a result of the x86 compatibility. Until Microsoft start building and providing every binary as a native build, and providing the tooling for others to do the same, they will remain wedded to x86, and I'll be unable to take their support for other platforms seriously.
If a bunch of volunteers can manage to provide over 20000 software packages for over 10 architectures, totalling over 500000 binary packages, it's entirely possible for Microsoft to support three. When I used to maintain the Debian build tools, it took around 18 hours to rebuild all of the roughly 18000 packages; compilers, kernel, tools, applications, everything. It would be much faster on a current system. It's all possible from a technical point of view.
> Running Linux on powerpc, arm, amd64, I get the exact same experience across the board as x86 bar platform-specific bits like openfirmware/efi tools.
This is demonstrably not true: there are plenty of ports that end up being Intel-only, and plenty of architectures that involve some sacrifice in terms of software choice if you want to run them. (As a SPARC and Raspberry Pi user I could elaborate, but hopefully you get the idea.) Not that there's anything wrong with that.
> Until Microsoft start building and providing every binary as a native build, and providing the tooling for others to do the same, they will remain wedded to x86, and I'll be unable to take their support for other platforms seriously.
What you take seriously is your business, but the standard you're holding Microsoft to is one Linux doesn't meet, let alone BSD, and it's completely arbitrary regardless. If their ARM platform does what it needs to do, it doesn't especially matter if it offers support for (for example) legacy Windows cruft.
> There are plenty of ports that end up being Intel-only .. as a SPARC and Raspberry Pi user
Sure, but you can compile failed builds of software yourself and get that tool on ARM or SPARC. ARM works pretty well on Debian, so long as you're not on Raspbian & using their ancient repos with broken software. Part of why I've avoided Raspberry Pis entirely.
A good point, but their repos are so eccentric because Debian refused to support the RPi's older ARM architecture and the RPi community had to roll their own. That speaks to the original point of this subthread, I believe. Not only does Linux not offer the "exact same experience across the board," it cannot even do it for the ARM architecture. For decent enough reasons, really, when you think about it.
Different versions of the ARM architecture with different ABIs. From the Debian POV that's a separate architecture to support. It could have been done by Debian, but was done by a third party, just as other minor platforms are supported.
I don't agree that it has a "different experience" because the tools and infrastructure are there to build the entire distribution from scratch. And this was done. I've done it myself several times. Once done, this architecture variant had the complete package set available for all the official supported architectures, bar any architecture-specific packages being added/omitted. Third parties can and do bootstrap and maintain entire architectures. I can't speak for the raspbian people and their port, but it's not hard to manually bootstrap the kernel and toolchain and then set an autobuilder loose on the entire source archive.
And that's the point I was trying to make about Windows; that's exactly what you can't obtain. Be it the old NT ports or the present day ones, outside the base "Windows" release and some key products, the rest of Microsoft's product portfolio is largely missing.
Yeah, it sounds like we're talking more about the difference between the limitations of open source and closed source than any particular failure on Microsoft's part to meet the expectations of its customers.
I don't think they're going to really square the circle in the way Linux can, having everything available because everything is open source. On the other hand, it's not inconceivable they could end up with a server platform that does offer an awful lot of Microsoft stuff as open source, which also makes available all the Linux userland stuff you'd want to have. Their handicap might be that not enough of their infrastructure software has made it to .NET yet, but they've shown lately that they're willing to do some porting when it's appropriate. (SQL Server!)
Only a select few boards were based on ARMv6; the Raspi 2 and above use a modern ARMv7 core. Running Raspbian vs Debian on a Raspi 2 or 3 shows off the massive performance gap between the two. IMO they should have used a single ARMv7 core from the get-go.
But the Raspberry Pi is a fundamentally flawed platform, with poor I/O, binary blobs required to make the hardware function, and a community that is toxic towards free software and maintains its own vaguely supported distro.
An OrangePi Zero ($7) or OrangePi PC Plus ($22) will blow a Raspi out of the water any day, due to each USB port and the ethernet port being directly wired into the SOC, allowing 40MB/s per port. Plus, I can run kernel 4.10 and mainline Debian on it without any blobs, and the only things I'll miss out on are GPU support & WiFi. The VPU has been reverse engineered though, so H.264 & H.265 video works well.
It's close to two decades now but I don't recall Microsoft promising that all the NT architectures would offer the exact same experience. It would have been silly then, and it would be doubly silly now that plenty of the stuff in Windows is legacy cruft.
Pretty much. Debian moved forward to recompiling for ARMv7 as it netted significant performance improvements, and if you wanted ARMv6 binaries you could compile them yourself. Raspbian did this, but not very well.
I'm not referring to that type of support; this is merely support for the base platform. I'm referring to the entire ecosystem of Microsoft products, of which you'll find most are x86 only.
When you say "there's no money to be made...so Microsoft has no reason to bother", this attitude is a major reason why ia64 failed, why their previous arm attempt failed, and why their current arm attempt is also likely to fail. If the software isn't there, it's a poor proposition for most customers.
When I run Linux on ia64 or arm, I have an entire distribution's worth of software at my fingertips, and for the most part I'm not losing out compared with more popular architectures. With Windows, no matter how technically good the base platform may be, the ecosystem is a wasteland and will remain so until Microsoft put the effort in to support them properly.
Supporting multiple platforms is not expensive; it's simply a matter of having the build infrastructure in place. In Debian we had every package automatically built on 11 platforms. Microsoft could do the same for their applications. For example, see https://buildd.debian.org/status/package.php?p=okular&suite=... -- one package built for 22 platforms. Building for three or four is not a lot to ask...
Yes, but none of this thread is about this specific platform and its merits; it's about the different strategies for supporting multiple platforms, and where Microsoft, through the choices they made, failed to realise their full potential on non-x86 platforms while other organisations managed to support them fully.
Sometimes I wonder if IA-64 was just an exercise in killing off Alpha and HP-PA...
Anyway, x64 succeeded because instead of producing something no one asked for, and poorly (IA-64), AMD went to Microsoft, found out what they wanted from a 64-bit chip, and built that.
If Intel had transitioned their processor line to IA64, without AMD to defy their roadmap, do you really believe consumer desktops would magically start using other vendor processors?
> AMD went to Microsoft, found out what they wanted from a 64-bit chip, and built that.
Because they still had the cross-license deal with Intel that allowed them to legally build x86 clones.
> I'm not referring to that type of support; this is merely support for the base platform. I'm referring to the entire ecosystem of Microsoft products, of which you'll find most are x86 only.
Windows Server comes with IIS and other services that would normally have been provided in a Linux environment by the Linux ecosystem. Also, as the article notes, SQL Server and Visual Studio were supported on Itanium as well.
As Itanium only succeeded as a server product, there's no business reason for Microsoft to have ported their desktop applications to it.
Those are just a select few products out of thousands of tools, applications and services I might need to run. The server vs desktop distinction isn't very important. What matters is the utility and hence viability of the platform as a whole. By not having the platform be generally usable, it greatly reduced its desirability and reach.
Any considerations for such a server/desktop split certainly should not apply to arm, which can be used for either. Also, contrast with the experience of ia64 on Linux, where I had the full set of tools, services, applications available. That's the sort of experience Microsoft should have provided, but didn't. And should also be doing for arm, but aren't there either.
As long as the non-x86 architecture is both 64-bit and little-endian, I think this is true. In the absence of those two properties, however, not at all. There are many mainstream Linux code bases which are either inconsistently endian-clean (meaning they are clean for some operations but not all) or straight up broken. This becomes really visible when debugging bizarre, seemingly impossible bugs on BE architectures.
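To illustrate the kind of bug being described (a hypothetical example of mine, not something from this thread): code that parses a little-endian wire or file format by copying raw bytes into an integer works fine on x86/amd64/arm, but silently produces wrong values on a big-endian machine. A minimal sketch in C, with an endian-clean alternative:

    /* Hypothetical example of an endianness bug when parsing a
     * little-endian format. */
    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    /* Buggy: interprets the bytes in host byte order, whatever that is.
     * Correct on little-endian hosts, wrong on big-endian ones. */
    static uint32_t read_u32_buggy(const unsigned char *buf)
    {
        uint32_t v;
        memcpy(&v, buf, sizeof v);  /* avoids alignment traps, not endianness */
        return v;
    }

    /* Endian-clean: assembles the value explicitly from little-endian
     * bytes, so it gives the same answer on every architecture. */
    static uint32_t read_u32_le(const unsigned char *buf)
    {
        return (uint32_t)buf[0]
             | (uint32_t)buf[1] << 8
             | (uint32_t)buf[2] << 16
             | (uint32_t)buf[3] << 24;
    }

    int main(void)
    {
        /* little-endian encoding of 0x12345678 */
        const unsigned char wire[4] = { 0x78, 0x56, 0x34, 0x12 };
        printf("buggy: 0x%08x  clean: 0x%08x\n",
               read_u32_buggy(wire), read_u32_le(wire));
        return 0;
    }

On a little-endian host both functions print 0x12345678; on a big-endian host the first prints 0x78563412, which is exactly the sort of "impossible" corruption that only shows up when someone finally runs the code on a BE box.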
Not true. When multiarch mattered (NT 3 and 4), Microsoft was the only vendor that delivered the exact same OS, device support and development environment across x86, Alpha, MIPS and PowerPC, and pushed the industry towards standardization. Microsoft has always taken arch and platform independence seriously, and this was evident even in 2010 when I worked on NT. You could rebuild the whole system for any supported arch (x86, x64, ia64 when that mattered), and now arm and arm64 as well.
It's funny, but Microsoft was spot-on correct to continue supporting 32-bit x86 on par with x64. Now they can just support binary-translating 32-bit x86 on ARM64 instead of being forced to support 64-on-64 translation, which would simply involve more overhead.
Great points. It is an odd fact that Microsoft generally avoided writing operating systems on or for Intel x86 processors [1], and started the development of both NT and CE on RISC processors.
Meanwhile, Linus had a PC with an Intel 386 processor, so that's what he started Linux on and for...
[1] MS-DOS was based on code that Microsoft bought in, not having time to develop it from scratch. However, Microsoft did have some success on x86 with its PC version of Unix, which was called Xenix.
Not that odd. The NT group specifically wanted a portable design, and they made the right call to initially target i860, MIPS and only then i386 machines. Contrast that to OS/2 development: the (never shipped) PowerPC OS/2 port was based on top of an IBM fork of Mach, because the kernel was too x86-specific (derived from the pre-virtual-memory 16-bit protected mode code).