Are there any things in Linux that need to be started over from scratch?
I'm curious how software can be created and evolve over time. I'm afraid that at some point, we'll realize there are issues with the software we're using that can only be remedied by massive changes or a complete rewrite.
Are there any instances of this happening? Where something is designed with a flaw that doesn't get realized until much later, necessitating scrapping the whole thing and starting from scratch?
And then there's the path from ALSA through all those barely functional audio daemons to PulseAudio, and then again to PipeWire. That one sure took a few tries to get right.
And the strangest thing about that is that neither PulseAudio nor PipeWire actually replaced anything. ALSA and PulseAudio are still there while I handle my audio through PipeWire.
there are issues with the software we’re using that can only be remedied by massive changes or a complete rewrite.
I think this was the main reason for the Wayland project. So many issues with Xorg that it made more sense to start over, instead of trying to fix it in Xorg.
according to kagiGPT..
~~i have determined that wayland is the successor and technically minimal:
*Yes, it is possible to run simple GUI programs without a full desktop environment or window manager. According to the information in the memory:
You can run GUI programs with just an X server and the necessary libraries (such as QT or GTK), without needing a window manager or desktop environment installed. [1][2]
The X server handles the basic graphical functionality, like placing windows and handling events, while the window manager is responsible for managing the appearance and behavior of windows. [3][4]
Some users prefer this approach to avoid running a full desktop environment when they only need to launch a few GUI applications. [5][6]
However, the practical experience may not be as smooth as having a full desktop environment, as you may need to manually configure the environment for each GUI program. [7][8]*~~
Wayland is not a display server like X11, but rather a protocol that describes how applications communicate with a compositor directly. [1]
Display servers using the Wayland protocol are called compositors, as they combine the roles of the X window manager, compositing manager, and display server. [2]
A Wayland compositor combines the roles of the X window manager, compositing manager, and display server. Most major desktops support Wayland compositors. [3]
And as I've understood and read about it, Wayland has been a nearly 10-year mess that ended up with a product as bad as, or perhaps worse than, Xorg.
Not trying to rain on either parade, but X is like what the Hubble telescope would be if we added new upgrades to it every 2 months. It's way past its end of life, doing things it was never designed for.
I do not want to fight and say you misunderstood. Let’s just say you have been very influenced by one perspective.
Wayland has taken a while to fully flesh out. Part of that has been delay by the original designers not wanting to compromise their vision. Most of it is just the time it takes to replace something mature ( X11 is 40 years old ). A lot of what feels like Wayland problems actually stem from applications not migrating yet.
While there are things yet to do, the design of Wayland is proving itself to be better fundamentally. There are already things Wayland can do that X11 likely never will ( like HDR ). Wayland is significantly more secure.
At this point, Wayland is either good enough or even superior for many people. It does not yet work perfectly for NVIDIA users which has more to do with NVIDIA’s choices than Wayland. Thankfully, it seems the biggest issues have been addressed and will come together around May.
The desktop environments and toolkits used in the most popular distros default to Wayland already and will be Wayland-only soon. Pretty much all the second-tier desktop environments have plans to get to Wayland.
We will exit 2024 with almost all distros using Wayland and the majority of users enjoying Wayland without issue.
X11 is going to be around for a long time but, on Linux, almost nobody will run it directly by 2026.
I've been using Wayland on Plasma 5 for a year or so now, and it looks like the recent NVIDIA driver work has been merged, so it should be getting even better any minute now.
I've used it for streaming on Linux with pipewire, overall no complaints.
Wayland is the default for GNOME and KDE now, meaning before long it will become the default for the majority of all Linux users. And in addition, Xfce, Cinnamon and LXQt are also going to support it.
Strange. I'm not exactly keeping track, but isn't the current trend going in just the opposite direction? It seems like tons of utilities are being rewritten in Rust to avoid memory safety bugs.
The more the code is used, the faster it ought to be. A function for an OS kernel shouldn't be written in Python, but a calculator doesn't need to be written in assembly, that kind of thing.
I can't really speak for Rust myself but to explain the comment, the performance gains of a language closer to assembly can be worth the headache of dealing with unsafe and harder to debug languages.
Linux, for instance, uses some assembly for the parts of it that need to be blazing fast. Confirming assembly code as bug-free, no leaks, all that, is just worth the performance sometimes.
But yeah I dunno in what cases rust is faster than C/C++.
Agree, call me unreasonable or whatever but I just don't like Rust nor the community behind it. Stop trying to reinvent the wheel! Rust makes everything complicated.
Starting anything from scratch is a huge risk these days. At best you'll have something like the python 2 -> 3 rewrite overhaul (leaving scraps of legacy code all over the place), at worst you'll have something like gnome/kde (where the community schisms rather than adopting a new standard). I would say that most of the time, there are only two ways to get a new standard to reach mass adoption.
Retrofit everything. Extend old APIs where possible. Build your new layer on top of https, or javascript, or ascii, or something else that already has widespread adoption. Make a clear upgrade path for old users, but maintain compatibility for as long as possible.
Buy 99% of the market and declare yourself king (cough cough chromium).
In a good way. Using a non-verified bytes type for strings was such a giant source of bugs. Text is complicated and pretending it isn't won't get you far.
The entire thing. It needs to be completely rewritten in rust, complete with unit tests and Miri in CI, and converted to a high performance microkernel. Everything evolves into a crab /s
Maybe not exactly Linux, sorry for that, but it was the first thing that came to my mind.
Web browsers really should be rewritten to be more modular and easier to modify. The web was supposed to be bulletproof and work even if some features are not present, but all websites are now built on the assumption that every browser has 99% of Chromium's features implemented, and they won't work in any browser written from scratch now.
The same guys who create Chrome have stuffed the web standards with needlessly bloated fluff that makes it nearly impossible for anyone else to implement it. If alternative browsers have to be a thing again, we need a new standard, or at least the current standard with significantly large portions removed.
Most of the standards themselves aren't the problem; we just shouldn't have to rely on them so heavily that a site is immediately dead if one small feature is unavailable.
Agreed. I mean, metadata should be protocol stuff, not document stuff. And rendering (font size etc) should be user side, not developer side. Browser should be modular, not a monolith. Creating a webpage should be easy again.
BoringSSL is not a drop-in replacement for openssl though:
BoringSSL is a fork of OpenSSL that is designed to meet Google's needs.
Although BoringSSL is an open source project, it is not intended for general use, as OpenSSL is. We don't recommend that third parties depend upon it. Doing so is likely to be frustrating because there are no guarantees of API or ABI stability.
We haven't rewritten the firewall code lately, right? *checks* Oh, it looks like we have. Now it's nftables.
I learned ipfirewall, then ipchains, then iptables came along, and I was like, oh hell no, not again. At that point I found software to set up the firewall for me.
It's actually a classic programmer move to want to start over again. I've read the book "Clean Code" and it talks about this a little bit.
Apparently it would not be the first time that the fresh start turns into the same mess as the old codebase it's supposed to replace. While starting over can be tempting, refactoring is in my opinion better.
If you refactor a lot, you start thinking the same way about the new code you write. So any new code you write will probably be better and you'll be cleaning up the old code too. If you know you have to clean up the mess anyways, better do it right the first time
....
However it is not hard to imagine that some programming languages simply get too old and the application has to be rewritten in a new language to ensure continuity. So I think that happens sometimes.
Yeah, this was something I recognized about myself in the first few years out of school. My brain always wanted to say "all of this is a mess, let's just delete it all and start from scratch" as though that was some kind of bold/smart move.
But I now understand that it's the mark of a talented engineer to see where we are as point A, where we want to be as point B, and be able to navigate from A to B before some deadline (and maybe you have points/deadlines C, D, E, etc.). The person who has that vision is who you want in charge.
Chesterton's Fence is the relevant analogy: "you should never destroy a fence until you understand why it's there in the first place."
I'd counter that with monolithic legacy apps without any testing, trying to refactor can be a real pain.
I much prefer starting from scratch, trying to avoid past mistakes while still maintaining the old app until the new app is ready. Then management starts managing, and the new app becomes the old app. Rinse and repeat.
GUI toolkits like Qt and Gtk. I can't tell you how to do it better, but something is definitely wrong with the standard class hierarchy framework model these things adhere to. Someday someone will figure out a better way to write GUIs (or maybe that already exists and I'm unaware) and that new approach will take over eventually, and all the GUI toolkits will have to be scrapped or rewritten completely.
Idk man, I've used a lot of UI toolkits, and I don't really see anything wrong with GTK (though they do basically rewrite it from scratch every few years it seems...)
The only thing that comes to mind is the React-ish world of UI systems, where model-view-controller patterns are more obvious to use. I.e. a concept of state where the UI automatically re-renders based on the data backing it
But generally, GTK is a joy, and imo the world of HTML has long been trying to catch up to it. It's only kinda recently that we got flexbox, and that was always how GTK layouts were. The tooling, design guidelines, and visual editors have been great for a long time
Which - in my considered opinion - makes them so much worse.
Is it because writing native UI on all current systems I'm aware of is still worse than in the times of NeXTStep with Interface Builder, Objective C, and their class libraries?
And/or is it because it allows (perceived) lower-cost "web developers" to be tasked with "native" client UI?
About 20 xdg-open alternatives (which is, btw, just a wrapper around gnome-open, exo-open, etc.)
I use handlr-regex, is it bad? It was the only thing I found that I could use to open certain links on certain web applications (like android does), using exo-open all links just opened on the web browser instead.
Be careful what you wish for. I’ve been part of some rewrites that turned out worse than the original in every way. Not even code quality was improved.
Funnily enough the current one is actually the one where we’ve made the biggest delta and it’s been worthwhile in every way. When I joined the oldest part of the platform was 90s .net and MSSQL. This summer we’re turning the last bits off.
I realize that's not exactly what you asked for but
PipeWire has been incredibly stable for me. The difference between the absolute nightmare of using BT devices with ALSA and the super smooth experience in PipeWire is night and day.
I would say the whole set of C-based assumptions underlying most modern software, specifically errors being just an integer constant that is translated into text, so it carries no details about the operation that was tried (who tried to do what to which object, and why did that fail).
You have stderr to throw errors into. And the constants are just error codes, like HTTP status codes. Without them, how would the computer know whether the program executed correctly?
You throw an exception like a gentleman. But C doesn't support them. So you need to abuse the return type to also indicate "success" as well as a potential value the caller wanted.
You mean 0 indicating success and any other value indicating some arbitrary meaning? I don't see any problem with that.
Passing around extra error handling info for the worst case isn't free, and the worst case doesn't happen 99.999% of the time. No reason to spend extra cycles and memory hurting performance just to make debugging easier. That's what debug/instrumented builds are for.
Passing around extra error handling info for the worst case isn’t free, and the worst case doesn’t happen 99.999% of the time.
The case "I want to know why this error happened" is basically 100% of the time when an error actually happens.
And the case of "Permission denied" or similar useless nonsense without any details, costing me hours of my life in debugging time that wouldn't be necessary if it just told me permission for whom to do what to which object, happens quite regularly.
It does very much have the concept of objects as in subject, verb, object of operations implemented in assembly.
As in who (user foo) tried to do what (open/read/write/delete/...) to which object (e.g. which socket, which file, which Linux namespace, which memory mapping,...).
There are many instances like that: systemd vs System V init, X vs Wayland, ed vs vim, TeX vs LaTeX vs LyX vs ConTeXt, OpenOffice vs LibreOffice.
Usually someone identifies a problem or a new way of doing things… then a lot of people adapt and some people don’t. Sometimes the new improvement is worse, sometimes it inspires a revival of the old system for the better…
It’s almost never catastrophic for anyone involved.
Alt text: Thomas Jefferson thought that every law and every constitution should be torn down and rewritten from scratch every nineteen years--which means X is overdue.
The goal of the zig language is to allow people to write optimal software in a simple and explicit language.
Its advantage over C is that they improved some features to make things easier to read and write. For example: arrays have a length and don't decay to pointers, defer, no preprocessor macros, no makefiles, first-class testing support, first-class error handling, type inference, and a large standard library. I have found Zig far easier to learn than C (despite the fact that Zig is still evolving and there are fewer learning resources than for C).
Its advantage over Rust is that it's simpler. I've never played around with Rust, but people have said that the language is more complex than Zig. Here's an article the Zig people wrote about this: https://ziglang.org/learn/why_zig_rust_d_cpp/
We need a networked file system with real authentication and network encryption that's trivial to set up and that is performant and that preserves unix-ness of the filesystem, meaning nothing weird like smb, so you can just use it as you would a local filesystem.
Mine is the prioritisation of devices. If someone turns on the flatshare BT box while I'm listening to death metal over my headphones, suddenly everyone except me is listening to death metal.
Not to mention BlueZ aggressively connects to devices. It would be nice if my laptop in the other room didn't interrupt my phone's connection to my earbuds.
Then again, we also have wired for a reason. Hate it all you want, but it works and is predictable.
Not connecting automatically. Bad quality. Some glitchy artifacts. It gets horrible. The only workaround I've found is stupid, but running apt reinstall --purge bluez gnome-bluetooth fixes it. So annoying, but I have to do this almost every day.
It's been a while (few years actually) since I even tried, but bluetooth headsets just won't play nicely. You either get the audio quality from a bottom of the barrel or somewhat decent quality without microphone. And the different protocol/whatever isn't selected automatically, headset randomly disconnects and nothing really works like it does with my cellphone/windows-machines.
YMMV, but that's been my experience with my headsets. I've understood that there's some proprietary stuff going on with audio codecs, but it's just so frustrating.
My most recent issue with Bluez is that it's been very inconsistent about letting me disable auto-switching to HSP/HFP (headset mode) when joining any sort of call.
It's working now, but it feels like every few months I need to try a different solution.
I installed a fairly small Rust program recently (post-XZ drama), and was a bit concerned when it pulled in literally hundreds of crates as dependencies. And I wasn't planning on evaluating all of them to see if they were secure/trustworthy; who knows if one of them had a backdoor like XZ? Rust can claim to be as secure as Fort Knox, but it means nothing if you have hundreds of randoms constantly going in and out of the building, and we don't know who's doing the auditing and holding them accountable.
In reality this happens all the time. When you develop a codebase it's based on your understanding of the problem. Over time you gain new insights into the environment in which that problem exists and you reach a point where you are bending over backwards to implement a fix when you decide to start again.
It's tricky because if you start too early with the rewrite, you don't have a full understanding, start too late and you don't have enough arms and legs to satisfy the customers who are wanting bugs fixed in the current system while you are building the next one.
.. or you hire a new person who knows everything and wants to rewrite it all in BASIC, or some other random language ..
Not really software, but personally I think the FHS could do with replacing. It feels like it's got a lot of historical baggage tacked on that it could really do with shedding.
Are there any things in Linux that need to be started over from scratch?
Yes, Linux itself! (i.e. the kernel). It would've been awesome if Linux were a microkernel; there are so many advantages to it, like security, modularity and resilience.
Wayland is incomplete and unfinished, not broken, obsolete, or hopelessly badly designed. PulseAudio was bad design. Wayland is very well designed; it's just that most things haven't been ported to it yet, plus some design-by-committee hell, but even that is kind of a necessary tradeoff so that Wayland actually lasts a long time.
What people see: lol Firefox can't even restore its windows to the right monitors
What the Wayland devs see: so how can we make it so Firefox will also restore its windows correctly on a possible future VR headset environment where the windows maintain their XYZ and rotation placement correctly so the YouTube window you left above the stove goes back above the stove.
The Wayland migration is painful because they took the occasion to redo everything from scratch, without the baggage of what traditional X11 apps could do. So there is less likely to be a need for a Wayland successor when new display tech arrives, and also no single display server so big that its quirks became features developers relied on for 20 years and essentially part of the standard.
There's nothing so far that can't be done in Wayland for technical implementation reasons. It's all because some of the protocols aren't ready yet, or not implemented yet.
X11 is 40 years old. I'd say it's been rather successful in the "won't need to be replaced for some time" category. Some credit where due.
There's nothing so far that can't be done in Wayland for technical implementation reasons. It's all because some of the protocols aren't ready yet, or not implemented yet.
I mean .. It doesn't matter why it can't be done. Just that it can't be done.
There’s nothing so far that can’t be done in Wayland for technical implementation reasons.
Then make it fully X11 backwards compatible. Make Wayland X12. C'mon, they already admitted NVIDIA was right and are switching to explicit sync, finally working to support the cards they've had a hate boner over simply because they're bigots about the driver's licensing. Time to admit that breaking the world was a mistake, too.
Can't even update Firefox in place. Have to download a new copy, run it from the downloads folder, make a desktop shortcut myself, which doesn't have the Firefox icon.
Can't remember if that was mint or Ubuntu I was fiddling with, but it's not exactly user friendly.
Seriously, I'm not a heavy software developer who partakes in projects of that scale or complexity, but just seeing it from the outside makes me hurt. All these protocols left, right, and center; surely just an actual program would be cleaner? Like, they could just rewrite X from scratch, implementing and supporting all modern technology and using a monolithic model.
Then small projects could still survive since making a compositor would almost be trivial, no need to rewrite Wayland from scratch cause we got "Waykit" (fictional name I just thought of for this X rewrite), just import that into your project and use the API.
That would work if the only problem they wanted to solve was an outdated tech stack for X. But there are other problems that Wayland addresses too, like: how to scale multiple monitors nicely, and whether it's a good idea to give all other apps the keystrokes you type into the one in focus (and probably a lot more).
Wayland and X are very, very different. The X protocol was designed for computer terminals that connected to a mainframe. It was never designed for advanced graphics, and the result is that we have built up an entire system that balances on a shoebox.
Wayland is a protocol that allows your desktop to talk to the display without a heavy server. The result is better battery life, simplified inputs, lower latency, better performance and so on
I agree in the sense that Wayland adoption would have definitely gone quicker if that was the case, however in the long run this approach does make sense (otherwise you will eventually just run into the same sorts of issues X11 had).
Btw what you're describing is not that far off from the normal way of using Wayland protocols in development - you use wayland-scanner to generate C source files from the protocols, and you include those to actually "use" the protocols in your programs. Admittedly all my Wayland development experience has been "client-side", so I really don't know how complex it is to build a compositor, but dwl (minimalist Wayland compositor) is only around 3k lines of code (only slightly more than dwm (minimalist X wm)).
It is complex to build a Wayland compositor. When none existed, you had to build your own. So it took quite a while for even big projects like GNOME and KDE to work through it.
At this stage, there are already options to build a compositor using a library where most of the hard stuff is done for you.
There will be more. It will not be long before creating Wayland compositors is easy, even for small projects.
As more and more compositors appear, it will also become more common just to fork an existing compositor and innovate on top.
One of the longer term benefits of the Wayland approach is that the truly ambitious projects have the freedom to take on more of the stack and innovate more completely. There will almost certainly be more innovation under Wayland.
All of this ecosystem stuff takes time. We are getting there. Wayland will be the daily desktop for pretty much all Linux users ( by percentage ) by the end of this year. In terms of new and exciting stuff, things should be getting pretty interesting in the next two years.
Needs to be replaced already. They're having to change to explicit sync, which they should have done from the start. So throw it out, start over, make X12.
I admit I haven't done a great deal of research, so maybe there are problems, but I've found that lzip tends to do better at compression than xz/lzma and, to paraphrase its manual, it's designed to be a drop-in replacement for gzip and bzip2. It's been around since at least 2009 according to the copyright messages.
That said, xz is going to receive a lot of scrutiny from now on, so maybe it doesn't need replacing. Likewise, anything else that allows random binary blobs into the source repository is going to have the same sort of scrutiny. Is that data really random? Can it be generated by non-obfuscated plain text source code instead? etc. etc.
I'm tempted to say the systemd ecosystem. Sure, it has its advantages and it's the standard way of doing things now, but I still don't like it. journalctl is a sad and poor replacement for standard log files, and systemd has absorbed a ton of stuff that used to be its own separate little thing (resolved, journald, crontab...), making it a pretty monolithic thing. And at least for me it fixed a problem that wasn't there.
Snapcraft (and Flatpak to some extent) also attempts to fix a non-existent problem, and at least for me they have caused more issues than benefits.
Not everyone does. Just a handful of loud idiots who mostly don't work with init systems. It is objectively better. There are some things you could criticise, but any blanket statement like that just falls into the former category.