When you run OpenSUSE, you can feel it was made by Germans.
The installer is a beautiful example of German engineering.
The package manager is a perfect example of German over-engineering.
If you run it with KDE, you have 2 redundant GUI admin tools for every config in the system, and 4 for setting up printers.
NixOS is for people who have accidentally uninstalled 90% of their system because they didn't pay attention to what other packages depended on the thing they were uninstalling, and were desperately looking for an undo button.
I'm still a Linux noob all things considered, and I've been using NixOS for six months or more.
It is HARD, but I see the true value of it. I will never need to reinstall Linux because I broke it, that's simply impossible.
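Worst case, NixOS keeps every previous generation of the system around, so recovery is roughly this (the standard commands, not necessarily my exact workflow):

    # pick an older generation from the boot menu, or roll back from a shell:
    sudo nixos-rebuild switch --rollback
    # see which generations exist:
    sudo nix-env --list-generations --profile /nix/var/nix/profiles/system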
If I ever need to migrate my system, it's all backed up to GitHub. With a single
bash update.sh
every .config file is backed up, the system is upgraded, and all packages are updated.
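For the curious, a script like that can be as simple as this (a sketch, assuming a flake-based config living in ~/.dotfiles, which is a guess, not my actual layout):

    #!/usr/bin/env bash
    set -euo pipefail
    cd ~/.dotfiles
    nix flake update                      # bump all flake inputs
    sudo nixos-rebuild switch --flake .   # rebuild the system from the repo
    git add -A
    git commit -m "update $(date -I)" || true   # nothing to commit is fine
    git push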
I just love Nix, it's the perfect OS for me.
Now I just need to learn how to use flakes...
Sidebar: I've never asked before, but maybe someone can help me out. If I install a flake of an application, am I supposed to add it to my existing flake, or can I keep flakes modular?
I've noticed that installing the nixvim flake generates a new flake, and it runs when I issue the
nix run ~/.dotfiles/nixvim/flake.nix
command, but I don't want to have to run that command every time. I feel like making a fish abbreviation isn't the correct way of doing this.
So I've only been using Nix for about a year and have only used flakes. I use it in two ways.
First, I have my main Nix flake. Almost everything is controlled from that. It has several outputs, from full-blown NixOS builds per host to Home Manager builds for non-NixOS systems.
Third-party flakes I use as inputs to my own flake, then use the overlay system to inject them into nixpkgs. Then I just install whatever I want like normal from nixpkgs. I can either override an existing package (neovim nightly replaces regular neovim for me) or add it as a new package under a different attribute name.
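Concretely, that pattern looks something like this in flake.nix (the host name is a placeholder and the exact overlay attribute varies per flake, so check the third-party flake's README for what it actually exports):

    {
      inputs = {
        nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
        # third-party flake pulled in as an input
        neovim-nightly-overlay.url = "github:nix-community/neovim-nightly-overlay";
      };

      outputs = { self, nixpkgs, neovim-nightly-overlay, ... }: {
        nixosConfigurations.myhost = nixpkgs.lib.nixosSystem {
          system = "x86_64-linux";
          modules = [
            ./hosts/myhost/configuration.nix
            # inject the input into nixpkgs, so plain "neovim" is the nightly build
            { nixpkgs.overlays = [ neovim-nightly-overlay.overlays.default ]; }
          ];
        };
      };
    }

After that, a normal rebuild against the flake picks up the injected package like anything else from nixpkgs.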
The second way is for projects with their own repo. I'll add a project flake that has a devShell, hooked up with direnv, so as soon as I enter that directory it sets up a sort of virtual environment just for that project. You can add outputs to it so others can use it as a third-party flake.
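A bare-bones version of that setup, as a sketch (the package choices here are just examples):

    # flake.nix in the project repo
    {
      inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

      outputs = { self, nixpkgs }:
        let pkgs = nixpkgs.legacyPackages.x86_64-linux; in
        {
          devShells.x86_64-linux.default = pkgs.mkShell {
            # tools that only exist inside this project's shell
            packages = [ pkgs.nodejs pkgs.sqlite ];
          };
        };
    }

    # .envrc in the same directory (needs direnv + nix-direnv), containing just:
    #   use flake

Then run direnv allow once, and the shell loads automatically every time you cd in.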
I think I've put Fedora on at least 4 personal systems and it has never caused an issue. It's so smooth it's boring, in the best way. Switched to it for daily computing about 4 years ago. I use a mini PC as a media server with Arch, and turning it on is always exciting. Just this fucking morning the default configuration decided that my main audio device was a microphone. Lovely. So flexible.
On the other hand, my server running Arch testing has never had any issues. In fact, the only issue on any of my devices, all Arch testing, was nvidia.
This is a YMMV situation. I had Gentoo running on a minipc for a while and it never had any random issues pop up. Any screw up was fully traceable to configuration and entirely my fault. It was kinda funny. Hope your server stays healthy.
I mean, I'm on Debian and I'm on the same install instance I've had for almost four years now. I'm constantly reading about how some of you people keep hosing your other distros with a normal update...
Real. Though sometimes running a recent version of something is a real challenge, unless it ships as an AppImage. If it's a small program you can usually backport the package from unstable or just build it yourself, but if it depends on some Rust or JS libraries or what have you, you have to do so much crap you might as well just be running trixie.
I think the age of distros shipping severely broken updates is over. And it was always, ALWAYS GRUB that broke after an update on Mint and openSUSE 10 years ago for me.
I'll never stop hating that Debian is labeled stable. I'm fully aware that they're using the definition of stable that simply means not updating constantly, but the problem is that people conflate that with stability as in not breaking. Except it's the exact opposite in my experience: I've had apt absolutely obliterate Debian systems way too often, whereas pacman on Arch seems to be exceptionally good at avoiding that. Sure, the updated package itself could have a bug or cause a problem, but I can't think of any instance where the actual process of updating is what eviscerated the system, like with apt and dpkg.
And even in the event of an update going catastrophically wrong, to the point that the system is inoperable, I can simply chroot in, use a statically built pacman binary, and reinstall ALL native packages with a one-liner. That has never failed to fix a system borked by an interrupted update or in need of a rollback.
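For reference, that recovery path is roughly the following (device names and mount points are examples, not anyone's actual layout):

    # from the Arch live ISO:
    mount /dev/sda2 /mnt && mount /dev/sda1 /mnt/boot
    arch-chroot /mnt
    # if pacman itself is broken, drop in a statically built pacman first, then:
    pacman -Qqn | pacman -S --noconfirm -    # reinstall every native package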
You are maybe conflating stability with convenience.
"Why is this stable version of my OS unstable when I update and or install new packages...."
The entire OS falling down randomly during normal background operations was always an issue or worry on every distribution, and old Debbie Stables was meant to help make Linux feel reliable for production server use, and it has done a decent job at it.
I mean, when I can take an Arch Linux installation that I forgot about on my server, now 8 years out of date, simply update the keyring manually, and then be fully up to date without any issue, while every time I've tried to jump multiple major versions on Debian it's died horrifically... I would personally call the latter less stable. Or at least less robust lol.
I genuinely think that because Arch Linux is a rolling distribution, its update process is just somehow more thorough and less likely to explode.
The last one with Debian was a buster to bookworm jump. Midway through, something went horrifically wrong and dpkg just bailed out. The only problem was that somewhere in all of that it removed every binary in /bin, leaving the system completely inoperable. I tried to Google for a solution like the Arch one, where I could chroot in and fix it with one simple line, but as far as I could find there is no such option with apt/dpkg. If I wanted to recover the system it would have been an entirely manual endeavor with a lot of pain.
I would also personally count having the tools to recover from catastrophic failure as an important part of stability, especially when people advocate for things like Debian in server-critical environments and actively discourage the use of things like Arch.
If the only thing granting it the title of stability is the lack of update frequency, that can simply be recreated on Arch Linux by just not updating frequently ಠ_ಠ
FWIW I've got a Debian server that hosts most of my sites and primary DNS server, that's been running since Etch (4.0, 2007ish). I've upgraded it over the years, switched from a dedicated server to OpenVZ to KVM, and it's still running today on Bookworm. No major issues with upgrades.
It's definitely not something that happens 100% of the time. I've also had long-standing Debian systems that didn't seem to care. However, I've had plenty that, for whatever reason, couldn't handle multiple major version hops and just eviscerated themselves; I've not had that with Arch personally. You may need to download the latest statically built pacman depending on how old the install is, but that and a keyring update usually have you covered.
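For an ancient install, that usually boils down to something like this (a sketch; if the installed pacman is too old to talk to current mirrors, run these two steps through a statically built pacman binary instead):

    pacman -Sy archlinux-keyring   # refresh the signing keys first
    pacman -Su                     # then the full upgrade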
They really should have used the word "static" instead of stable. Stable definitely has connotations of functional stability, and unstable of functional instability.
Depends on the workload. Debian has very old packages and can be insecure, but it's a set-it-and-forget-it type of thing, which is good when uptime is critical for a server. For desktops, or servers that need better security but can tolerate a little downtime, rolling releases are good too, if you're diligent enough to update frequently, and you should be, since updates usually contain a lot of patched vulnerabilities.
Good point! But I recently swapped to Debian 12 from Fedora 41. The latter needed constant updates, several times a day, and despite this it was not stable at all.
Fedora is good on laptops since it has the very newest kernel and thus includes all the latest driver fixes (which are needed for laptops like the Framework where they're actively improving things). On the other hand, it has the very newest kernel and thus includes all the latest bugs.
Can I get some context please? My Fedora install wasn't using the TPM, I had to configure it manually; I haven't noticed any difference in boot speed with or without TPM-backed encryption.
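For anyone curious what the manual setup involves: on a systemd-based distro, binding an existing LUKS volume to the TPM is typically done with systemd-cryptenroll. The device path and PCR list below are just examples, adjust for your disk:

    sudo systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=0+7 /dev/nvme0n1p3
    # then add tpm2-device=auto to that volume's options in /etc/crypttab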
I’ve never had any issues with my Arch install being unpredictable. It has always worked exactly as I expected it to, even though I update it every couple of days.
I've been using Arch since 2014. If I could be arsed, I could write you a looooooooong list of regressions I've had to deal with over the years. For an experienced Linux user, they're usually fairly easy to deal with, but saying you never have to deal with anything is just a lie.
My experience with Arch is basically: it's all very predictable until it isn't and you suddenly find yourself troubleshooting something random like unexplainable bluetooth disconnects caused by a firmware or kernel update.
I started learning about Linux 4 months ago. Installed Arch with archinstall pretty easily in a VM, and it booted up no problem. But you have to manually install the desktop if you want a GUI (who doesn't lol). There are many desktops for Arch, and the most common ones have pretty good documentation. If I were you, I'd experiment with some more niche desktop environments.
Weird. I promptly tried Fedora and switched to Tumbleweed after Fedora kept crashing soon after startup. Hardware configuration probably affects the outcome a lot.
Fedora is security? I mean, don't get me wrong, I love it, it's my daily driver after trying just about every distro under the sun, but I would've figured something like Qubes would stand head and shoulders above it.
I mean, image-based (immutable) distros are quite a bit more secure than regular ones, and Fedora Atomic (Silverblue, Bazzite, etc.) is pretty much the only great choice when it comes to that kind of operating system.
No no, of course they all do. Fedora just comes with SELinux out of the box, probably still a consequence of it being the upstream of Red Hat Enterprise Linux, from before IBM came along.
There are many distros with similar or even better security than Fedora. The least secure ones are Ubuntu and distros based on it, and Debian stable. Even less secure is any inactive distro. But in general, most distros can be hardened, some more, some less. Like I can harden my Android phone to something similar to Arch's level (yes, I also use a custom kernel on my phone, the most secure one for my device).
Debian? Insecure? It's only as insecure as you make it. The default minimal installation from the netinstall CD has barely anything running - not even SSH unless you explicitly select it during installation.
SUSE was a German company a century ago, then it changed hands more than a soap bar in a public restroom and now I have no idea if it’s even a terrestrial company anymore
Check the comment from superkret, basically overengineered, redundant and not very intuitive.
I work in German SW development, so I understand. I would put it like this: German backends are among the best you can find, but German frontends are usually complicated and not intuitive...
The main problem is the way YaST2 is (not) integrated with the modern KDE and GNOME settings. GNOME 40 then screwed things up even more for them, as every item is now part of the overview; there's no classical menu anymore.
If you know where to find things it's great, but right now it indeed feels quite messy, with lots of settings hard to find and split across lots of submenus.
A more accurate way I would describe Fedora is:
Adopting modern features first (Wayland, PipeWire, etc.; for example, there's no X11 session in most stable Wayland desktops) and only having free and open source repos (RPM Fusion can be added, but it's not official and excludes the kernel drivers).
I think a more serious attempt to summarize openSUSE would probably be: Functionality.
Debian, Arch, Fedora and such are all weirdly similar in that they focus so much on minimalism. For example, Debian uses dash as the default shell, which breaks TTYs, but possibly squeezes out a tiny bit of performance, so I guess that's worth it...?
Debian only uses dash for the system shell (/bin/sh), and it does improve performance a bit given how many shell scripts run on a typical Linux system. The interactive shell is still Bash by default.
Sure, it's still certainly a choice. It took me multiple years to realize why it's so broken on TTYs, as well as after you run newgrp, and probably in other places.
I thought Linux just sometimes goes into this buggy state, where you can't make any typos. At one point, I broke my GUI session and had to fix it, typing commands off of my phone screen, without making any typos.
Learning that this is Working As Intended™ just killed me...
These days, I know that you can just run bash (or your shell of choice) to get out of this buggy state, and I still set bash as the system shell when I have to use a Debian-based system, because I just do not care about however much performance dash brings.
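If anyone wants to do the same, on Debian and its derivatives it's roughly this (standard tools, nothing exotic):

    ls -l /bin/sh               # shows which shell /bin/sh currently points at
    sudo dpkg-reconfigure dash  # answer "No" to point /bin/sh at bash instead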
I used Tumbleweed with KDE. It's something I can recommend. Not that customizable, but it has tons of features and it's very stable for a rolling distro. It only breaks if you try to customize stuff too much.
Mint: come for the ease of installation and use, stay because it's just Ubuntu and Debian under the hood so it has tons of support, and the terminal is right there if you need to do some real shit.
I think mine doesn’t roll off the tongue in quite the same way.
You're confusing security with privacy. While the distros you mentioned are great for preventing ISPs and governments from spying on you (privacy), they're not really any better than Fedora at preventing hackers from exploiting your vulnerable web server (security).
No, Qubes, Bazzite, and Garuda were made with security in mind. Containerization, SELinux enforcing, hash checks, and address space layout randomization are built in. These are all more secure than Fedora. Qubes, for example, uses VM containers to completely isolate every app, so the system is almost impossible to compromise through malware or hacking. Bazzite uses an immutable root file system, much like stock Android; it may not align well with Unix philosophies, but there isn't really a way for malicious code to run with elevated privileges or to manipulate system files. Garuda automatically creates snapshots of the system, so if it is compromised, it can be rolled back quickly. Snapshots to external devices or the cloud are supported as well. It uses zram compression for swap, which helps avoid data leaking to the disk, so it makes sure that after a reboot every session is gone, since data from RAM can't end up on the disk. It also uses Firejail and Chaotic-AUR sandboxing. There is some support for Secure Boot too. So these are all highly secure operating systems. And to some degree, privacy and security overlap.
Are *buntu flavors risky for my workstation? Should I be considering Fedora?
Why would they be risky? O.o They're the preferred workstation setup at my place, because Ubuntu is widespread enough that it can be relied upon to be the distro admins have the most experience with (which is a self-perpetuating thing, I am aware).
If it cannot parse the configuration file, you don't update. It is perfectly, 100% stable, about 60% of the time (whenever I manage to change my config file without an error).
NixOS appeals to a niche audience who like to brag about it. I think it's not a good idea to base everything on config files, since there's a lot of room for user error.
You've literally described Linux. It wasn't even a week ago that we had a circlejerk on here about how great it is that everything on Linux is an ASCII file (technically inaccurate on most distros) rather than a nasty, nasty database, where the most you can fuck up at a time is a single entry versus breaking the schema of an entire file.