OK, maybe you wouldn't pay three grand for a Project DIGITS PC. But what about a $1,000 Blackwell PC from Acer, Asus, or Lenovo?
Besides, why not use native Linux as the primary operating system on this new chip family? Linux, after all, already runs on the Grace Blackwell Superchip. Windows doesn't. It's that simple.
Nowadays, Linux runs well with Nvidia chips. Recent benchmarks show that the open-source Linux graphics drivers perform as well with Nvidia GPUs as Nvidia's proprietary drivers.
Even Linus Torvalds thinks Nvidia has gotten its open-source and Linux act together. In August 2023, Torvalds said, "Nvidia got much more involved in the kernel. Nvidia went from being on my list of companies who are not good to my list of companies who are doing really good work."
Don't get too excited -- if this goes like the last few NVidia hardware launches, it will:
cost too much
run a non-mainline kernel
lose NVidia support after 3 months
Go talk to all the Jetson owners out there and see how happy they are with NVidia Linux boxes. I'll believe it when I see it (and when it is supported for longer than a quarter).
Linux, after all, already runs on the Grace Blackwell Superchip. Windows doesn’t.
And why is that?
Project DIGITS features the new NVIDIA GB10 Grace Blackwell Superchip, offering a petaflop of AI computing performance for prototyping, fine-tuning and running large AI models.
With the Grace Blackwell architecture, enterprises and researchers can prototype, fine-tune and test models on local Project DIGITS systems running Linux-based NVIDIA DGX OS, and then deploy them seamlessly on NVIDIA DGX Cloud™, accelerated cloud instances or data center infrastructure.
Oh, because it's not a fucking consumer product. It's for enterprises that need a cheap supercomputer
But ARM is the most deployed microprocessor in the world? I'd much rather write ARM assembly than Intel or PowerPC. For higher-level languages, ARM has good compiler support. Can you explain why you don't like ARM? I'm genuinely curious, because it is probably my favorite development environment (I mostly write embedded system software).
I'm planning on getting a new PC soon. I was planning on avoiding Nvidia because I had read it might be more difficult to get drivers. Does this mean they are going to improve things in general, or just for the newest and likely most expensive stuff? I don't want to buy the newest possible GPU, since new ones always have a bloated price and slightly older ones are likely decent enough too.
Nvidia drivers on Linux are messy and have been for a long time. It took them ages to fix Vsync in Wayland. If you want to run Linux, go AMD (or Intel).
Modern Nvidia GPUs work great, like the GTX 900 series and newer.
The main problem is Nvidia's legacy cards, where Nvidia isn't updating its proprietary drivers and isn't making them open source. That leaves newer kernels falling back to Nouveau, which has fewer features and uses more power, but is Wayland compatible.
Nvidia was "forced" to integrate Linux into its ecosystem
100% bullcrap.
Nvidia's servers for data processing have always run Linux -- certainly not Windows. So why would they write multiple versions of a driver for the same hardware interface? Their servers use the same drivers that you would use for gaming on a Linux desktop system.
In fact, no version of Windows is supported on their DGX servers, and AFAIK you can't even install Windows on it (even if you managed, it wouldn't be usable).
Long story short: about 6 or 7 years ago, a vendor we were working with was building the Linux version of their SDK. We wanted to do some preliminary testing on Nvidia's new T4s, which at that point were only available via Nvidia's testing datacenter (which we had access to).
During a call with some of the Nvidia engineers, I had to ask the awkward question: "any chance there's a Windows server we can test on?" I knew it was a cringe question, and I died a little during the 10-second silence until one of the Nvidia guys finally replied with "no one uses Windows for this stuff." And he said it slowly, like the reply to such a question needed to go slowly to be understood, because who else would ask that question unless you're slow in the head?
Nvidia has always been hostile to the Linux community, or negligent to say the least.
People say "hostile", but I think a better word is arrogant. They wanted to force the industry to use implementations they owned or pioneered, like EGLStreams, instead of open standards. But AMD and Intel have proven that open-source graphics drivers not only work, but benefit from being open, so the community can scratch their own itches and fix issues faster.
Yep, Nvidia has never been hostile towards Linux; they benefit from supporting it. They just don't care to support the desktop that much, and frankly neither do AMD or Intel. They often take an extremely long time to fix simple bugs that only affect desktop usage. Fortunately, in AMD's and Intel's case, the drivers can be fixed by other open-source contributors.
It's not. It had nothing to do with it. Nvidia was all in with Linux as soon as they realized their hardware could be used for data processing and AI. That realization was way more than a decade ago.
Don't know about "always." In recent years, like the past 10 years, definitely. But I remember a time when Nvidia was the only reasonable recommendation for a graphics card on Linux, because Radeon was so bad. This was before Wayland, and probably even before AMD bought ATI. And it was certainly long before the amdgpu drivers existed.
Nvidia is still rather nice with FreeBSD, because their official proprietary driver there is, well, fully official, while drivers ported from Linux somewhat lag behind and have problems sometimes.
Well, it's still a modified custom distro, and other distros will need to invest extra effort to be able to run on this hardware.
So, no actual freedom of choice for users again...
I've found my preferences creeping up in price again, but only because I want an actually, physically lightweight laptop, and those have been getting more available, more Linux-able, and more capable.
I only need a few hundred dollars' worth of computer, and anything more can live on a rack somewhere. I'll pay more than that for my computer to be light enough that I don't need to think about it.
Up until the early 2000s, serial computation speed doubled about every 18 months. That meant virtually all software simply ran twice as quickly after every 18 months of CPU advances. And since taking advantage of that was trivial, new software releases did: they traded CPU cycles for shorter development time or more functionality, and demanded current hardware to run at a reasonable clip.
In that environment, it was quite important to upgrade the CPU.
But that hasn't been happening for about twenty years now. Serial computation speed still increases, but not nearly as quickly any more.
Throughout the 80’s and 90’s, CPUs were able to run virtually any kind of software twice as fast every 18-20 months. The rate of change was incredible. Your 486SX-16 was almost obsolete by the time you got it through the door. But eventually, at some point in the mid-2000’s, progress slowed down considerably for single-threaded software – which was most software.
Perhaps the turning point came in May 2004, when Intel canceled its latest single-core development effort to focus on multicore designs. Later that year, Herb Sutter wrote his now-famous article, "The Free Lunch Is Over." Not all software will run remarkably faster year-over-year anymore, he warned us. Concurrent software would continue its meteoric rise, but single-threaded software was about to get left in the dust.
If you’re willing to trust this line, it seems that in the eight years since January 2004, mainstream performance has increased by a factor of about 4.6x, which works out to 21% per year. Compare that to the 28x increase between 1996 and 2004! Things have really slowed down.
We can also look at the roughly twelve years since then, where the trend is even slower:
This is using a benchmark to compare the single-threaded performance of the i7-4960X (Intel's high-end processor back at the start of 2013) to that of the Core Ultra 9 285K, the current one. In those ~12 years, the latest processor has managed single-threaded performance about (5068/2070) ≈ 2.45 times that of the 12-year-old processor. That's (5068/2070)^(1/12) ≈ 1.0775, about a 7.7% performance improvement per year. The age of a processor doesn't matter nearly as much in that environment.
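If you want to plug in your own benchmark scores, here's a minimal Python sketch of the compound-annual-growth arithmetic used above. The era totals and the 2070/5068 scores are the numbers quoted in these comments, not fresh measurements.

```python
# Annualized performance growth implied by a total speedup over some span.
# Scores and spans below are the ones quoted in the comments above.

def annual_growth(old: float, new: float, years: float) -> float:
    """Compound annual growth rate: (new/old)^(1/years) - 1."""
    return (new / old) ** (1 / years) - 1

eras = [
    ("1996-2004", 1.0, 28.0, 8),        # ~28x total
    ("2004-2012", 1.0, 4.6, 8),         # ~4.6x total
    ("2013-2025", 2070.0, 5068.0, 12),  # i7-4960X vs. Core Ultra 9 285K
]

for label, old, new, years in eras:
    rate = annual_growth(old, new, years)
    print(f"{label}: {new / old:.2f}x total, {rate:.1%} per year")
# 1996-2004: 28.00x total, 51.7% per year
# 2004-2012: 4.60x total, 21.0% per year
# 2013-2025: 2.45x total, 7.7% per year
```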
We still have had significant parallel computation increases. GPUs in particular have gotten considerably more powerful. But unlike serial compute, parallel compute isn't a "free" performance improvement -- software needs to be rewritten to take advantage of it, many problems are hard to parallelize, and some problems cannot be parallelized at all.
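To make the "not free" point concrete, here's a minimal Python sketch of my own (not from any of the comments above): the serial version only gets faster with a faster core, while the parallel version has to be restructured into independent chunks, and that restructuring only works because this particular problem splits cleanly.

```python
# The same sum, computed serially and in parallel. The parallel version
# had to be rewritten around independent chunks -- and that's only possible
# because summing squares is embarrassingly parallel.
from concurrent.futures import ProcessPoolExecutor

def chunk_sum(bounds: tuple[int, int]) -> int:
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def serial_sum(n: int) -> int:
    # Speeds up only when single-core performance improves.
    return sum(i * i for i in range(n))

def parallel_sum(n: int, workers: int = 4) -> int:
    # Explicitly restructured to spread work across cores.
    step = n // workers
    chunks = [(k * step, n if k == workers - 1 else (k + 1) * step)
              for k in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(chunk_sum, chunks))

if __name__ == "__main__":
    n = 2_000_000
    assert serial_sum(n) == parallel_sum(n)  # same answer, different structure
```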
Honestly, I'd say that the most noticeable shift is the one away from rotational drives to SSDs -- there are tasks for which SSDs greatly outperform rotational drives.
My line for computational adequacy was crossed with the Core 2 Duo. Any chip since has been fine for everyday administration or household use, and they are all still fine for running Linux.
Any Apple Silicon chip, including the M1, is now adequate even for high-end production, setting a new low bar and a new watershed.
I bought a former office HP EliteDesk 800 G2 16GB for $120 on eBay or Amazon (can’t recall) 2 years ago with the intention of it just being my server. I ended up not unhooking the monitor and leaving it on my desk since it’s plenty fast for my needs. No massive PC gaming rig but it plays Steam indie titles and even 3D modeling and slicing apps at full speed. I just haven’t needed to get anything else.
Being blind, I don't play video games and don't do any kind of 3D graphics and stuff like that. So many, many computers would fit my specifications.
Edit: My laptop right now is a Dell Latitude E5400 from like 2014, with an Intel Core i5, eight gigabytes of RAM, and a 7200 RPM drive, and it works well enough. Honestly, the only problem with it is that it does not charge the battery, so as soon as it is unplugged from the wall, it just dies. And it's not the battery itself, because I've tried getting new batteries for it; it's something in the charging circuitry. It works fine on wall power, but it just does not charge the battery. I figure, with it being 10 years old already, at some point I will have to replace it.
I was that way for the longest time. I was more than content with my 4-core, 8-thread 4th-gen i7 laptop. I only upgraded to an 11th-gen i9 system because I wanted to play some games on the go.
But after I upgraded to that system I started to do so much more, and all at once, mostly because I actually could; the old system would have cried in pain long before then. But mid last year I finally broke down and bought a 13th-gen i9 system to replace it, and man do I flog the shit out of this computer. Just having the spare power lying around made me want to do more and more with it.
My phone has been my primary computing device for several years now, and so I hardly ever use my laptop anyway. So it honestly doesn't make a whole lot of sense for me to spend a ton of money on it.