So basically, everybody switched from expensive UNIX™ to cheap "unix"-in-all-but-trademark-certification once it became feasible, and otherwise nothing has changed in 30 years.
Apple built its current desktop environment, for its proprietary ecosystem, on BSD with its own twist, while supercomputers are typically multiuser parallel-computing beasts, so I'd say it is really fucking surprising. Pretty, responsive desktop environments and breathtaking number crunchers are polar opposites as products. Fuck me, you'll find UNIX roots even in Windows NT, but my flabbers would be ghasted if Deep Blue had dropped a Blue Screen.
Meh, you just needed a discrete GPU, and not even a good one at that. A basic, bare-bones card with 128 MB of VRAM and Pixel Shader 2.0 support would have sufficed, but sadly most users didn't even have that back in '06-'08.
It was mostly the consumers' fault for buying cheap garbage laptops with trash-tier iGPUs, and the manufacturers' for slapping a "compatible with Vista" sticker on them and pushing those shitboxes. If you had a half-decent $700-800 PC at the time, Vista ran like a dream.
For Windows I couldn't find anything.
If you google "Windows supercomputer", you just get lots of results about Microsoft supercomputers, which of course all run on Linux.
Microsoft earnestly tried to enter the space with Windows Compute Cluster Server (later Windows HPC Server): a deployment system, a job scheduler and an MPI implementation (MS-MPI). Licenses were quite cheap and they pushed hard with free consulting and support, but it did not stick.
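(For a sense of what that stack targeted, here is a generic MPI "hello world" in C. It's just an illustrative sketch, nothing Microsoft-specific: every call is standard MPI, so the same source builds against MS-MPI on a Windows cluster or against Open MPI/MPICH on a Linux one.)

    /* Minimal MPI "hello" in C -- illustrative sketch only.
     * All calls below are standard MPI, so this builds against MS-MPI,
     * Open MPI or MPICH alike (e.g. compile with mpicc). */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, name_len;
        char host[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);                   /* start the MPI runtime      */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* this process's rank        */
        MPI_Comm_size(MPI_COMM_WORLD, &size);     /* total processes in the job */
        MPI_Get_processor_name(host, &name_len);  /* node this rank landed on   */

        printf("rank %d of %d on %s\n", rank, size, host);

        MPI_Finalize();                           /* clean shutdown             */
        return 0;
    }

Either way the source is identical; what differs is the launcher (mpiexec with MS-MPI and its job scheduler, mpirun or srun on typical Linux clusters).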
I think you can actually see it in the graph.
The Condor Cluster, with its 500 teraflops, would have qualified for the Top 500 from 2009 until roughly 2014.
The PS3's operating system is BSD-derived, and you can see a thin yellow line in exactly that time frame.
How can there be N/A, though? How can any functional computer not have an operating system? Or does a machine count as a supercomputer just for having a really big MHz number on its CPUs?
That's certainly a big part of it. When one needs to buy a metric crap load of CPUs, one tends to shop outside the popular defaults.
Another big reason, historically, is that supercomputers typically didn't offer any non-command-line way to interact with them, and Windows needed one.
Until PowerShell and Windows 8, there were still substantial configuration options in Windows that were managed 100% by graphical tools. They could be changed by direct file edits and registry editing, but that added a lot of risk: all of the "did I make a mistake?" tools were graphical, and so unavailable from the command line.
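(Purely as an illustration of what "registry editing" without the graphical safety net looks like, here is a small C sketch that writes a value through the standard Win32 registry API. The key path "Software\ExampleApp" and the value name are made-up placeholders; the point is that nothing here checks whether the change makes sense, which is exactly the risk described above.)

    /* Sketch: setting a configuration value directly in the registry.
     * Key path and value name are hypothetical; link against Advapi32.
     * Nothing validates the data the way a GUI configuration applet would. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HKEY key;
        DWORD enabled = 1;  /* raw value to write */

        LONG rc = RegOpenKeyExA(HKEY_CURRENT_USER, "Software\\ExampleApp",
                                0, KEY_SET_VALUE, &key);
        if (rc != ERROR_SUCCESS) {
            fprintf(stderr, "open failed: %ld\n", rc);
            return 1;
        }

        rc = RegSetValueExA(key, "SomeSetting", 0, REG_DWORD,
                            (const BYTE *)&enabled, sizeof(enabled));
        if (rc != ERROR_SUCCESS)
            fprintf(stderr, "write failed: %ld\n", rc);

        RegCloseKey(key);
        return rc == ERROR_SUCCESS ? 0 : 1;
    }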
So until around 2006, any version of Windows stripped down enough to run on a supercomputer cluster was going to be missing a lot of features.
Since Linux and Unix started as command-line operating systems, both already had plenty of fully featured options for supercomputing.
Unix is basically a brand name.
BSD had to be completely rewritten to remove all Unix code so that it could be published under a free license.
It isn't Unix-certified.
So it is Unix-derived, but not currently a Unix system (which is a completely meaningless term anyway).
To make it more specific, I guess. And what's the problem with that? It's like having a "people living on boats" category and a "people with no long-term address" category. You could fold the former into the latter, but then you'd just be conveying less information.
Others have answered, but it's interesting to know the history of UNIX and why this came to be. BSD is technically UNIX-derived, but being more specific isn't the reason it has distinct branding. As with many evils, the root is money, and there was a lot in play in how it all happened, including AT&T being a phone monopoly.
This looks impressive for Linux, and I’m glad FLOSS has such an impact! However, I wonder if the numbers are still this good if you consider more supercomputers. Maybe not. Or maybe yes! We’d have to see the evidence.
There's no reason to believe smaller supercomputers would run significantly different OSes.
At some point you enter the realm of mainframes and servers.
Mainframes almost all run Linux now; the last Unixes are close to EOL.
Servers have about a 75% Linux market share, with the rest mostly running Windows and some BSD.
"I wonder if the numbers are still this good if you consider more supercomputers."
Great question. My guess is not terribly different.
"Top 500 Supercomputers" is arguably a self-referential term. I've seen the term "super-computer" defined whether it was among the 500 fastest computer in the world, on the day it went live.
As new supercomputers come online, workloads from older ones tend to migrate to the new ones.
So my impression is there usually aren't a huge number of currently operating supercomputers outside of the top 500.
When a supercomputer falls toward the bottom of the Top 500, there's a good chance it's getting turned off soon.
That said, I'm referring here only to the supercomputers that spend a lot of time advertising their existence.
I suspect there's a decent number out there today that prefer not to be listed. But I have no reason to think those don't also run Linux.
The previously fastest ran Red Hat Enterprise Linux; the current fastest runs SUSE Linux Enterprise.
The current third fastest (owned by Microsoft) runs Ubuntu. That's as far as I care to research.