To be fair, we only know of this one. There may well be other open source backdoors floating around with no detection. Was Heartbleed really an accident?
True. And "given enough eyeballs, all bugs are shallow" is a neat-sounding maxim from a past when there were far fewer lines of code than there are now. Sometimes it is scary to see how long a vulnerability in the Linux kernel has sat there, "waiting" to be exploited.
Still far better than a proprietary kernel made by a tech corp: carried nearly unchanged from release to release, maintained by even fewer people, and those few might well be adding a backdoor themselves for their government agency friends.
Yeah, he just didn't find the right unmaintained project. There are many, many CS undergrads starting projects that will become unmaintained pretty soon.
I've gotten back into tinkering on a little Rust game project; it has about a dozen dependencies on various math and gamedev libraries. When I go to build (just like with npm in my JavaScript projects), cargo needs to download and build just over 200 crates. Three of them build and run "install scripts" which are themselves also Rust programs. I know this because my anti-virus flagged each of them and I had to allow them through so my little roguelike would build.
Like, what are we even supposed to tell "normal people" about security? "Yeah, don't download files from people you don't trust and never run executables from the web. How do I install this programming utility? Blindly run code from over 300 people and hope none of them wanted to sneak something malicious in there."
I don't want to go back to the days of hand-chiseling every routine into bare silicon, but I feel like there must be a better system we just haven't devised yet.
Researchers have found a malicious backdoor in a compression tool that made its way into widely used Linux distributions, including those from Red Hat and Debian.
Which is why you shouldn't do that. The dependency nightmare is a real problem many developers face. More to the point, they impose it on you as well if you are for any reason forced to use their software. Well-established libraries are a gateway to this. People go out of their way to complicate life for themselves and a massive number of others, just so they can avoid writing a function or two. The biggest absurdity I like to point out to people is the existence of the is-number NPM package, which does exactly what its name says. It has 2300 projects depending on it!!! The manifest file for said package is bigger than the source. And the author had the hubris to "release it under MIT". How can you claim copyright on num - num === 0?
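For comparison, a dependency-free version of that check is a few lines. This is a hedged sketch, not the package's exact semantics (I believe is-number also accepts numeric strings, but treat the behavior below as an assumption):

```javascript
// Minimal stand-in for an is-number style check; the semantics are an
// assumption, not a faithful copy of the package's behavior.
function isNum(value) {
  if (typeof value === "number") {
    // Rejects NaN and +/-Infinity, same as the num - num === 0 trick:
    // NaN - NaN and Infinity - Infinity are both NaN, which !== 0.
    return value - value === 0;
  }
  if (typeof value === "string" && value.trim() !== "") {
    // Numeric strings like "42" coerce cleanly; "abc" becomes NaN.
    return Number.isFinite(Number(value));
  }
  return false;
}

console.log(isNum(42));    // true
console.log(isNum("42"));  // true
console.log(isNum(NaN));   // false
console.log(isNum("abc")); // false
```

Whether that warrants a package, a license, and 2300 dependents is the question.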
On all the projects I manage, I don't allow new dependencies unless they are absolutely needed and can't be easily re-implemented. And even then they'd have to be already in the Debian repository, since that's a good and easy way to ensure quick fixes and patching should it be needed. Sometimes an alternative to what we wanted to use is already in the repo; then we implement things using a different approach. We only have a few Python modules that are not available in the repo.
Managing project complexity is a hard thing, and dependencies especially have a nasty habit of creeping up. I might be too rigid or old-school or whatever you want to call it, but hey, at least we didn't get our SSH keys stolen by an NPM package.
I do not get why people don't learn from Node/NPM: if your language has no exhaustive standard library, the community ends up reinventing the wheel, and each real-world program has hundreds (or thousands) of dependencies.
Instead of throwing new features at Rust, the maintainers should focus on growing a trusted standard library and improving tooling, but that is less fun, I assume.
It's a really wicked problem to be sure. There is work underway in a bunch of places around different approaches to this; take a look at SBoM (software bill-of-materials) and reproducible builds. Doesn't totally address the trust issue (the malicious xz releases had good gpg signatures from a trusted contributor), but makes it easier to spot binary tampering.
Shameless plug for the OSS Review Toolkit project (https://oss-review-toolkit.org/ort/), which analyzes your package manager metadata, builds a dependency tree, and generates an SBOM for you. It can also check for vulnerabilities with the help of VulnerableCode.
Do you really need to download new versions at every build? I thought it was common practice to use the oldest safe version of a dependency that offers the functionality you want. That way your project can run on less up-to-date systems.
Most software does not include detailed security fixes in the changelog for people to check; and many of these security fixes are in dependencies, so they are unlikely to be documented by the software the end user actually sees.
So most of the time, the safest "oldest safe" version is just the latest version.
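As a concrete (hypothetical, package names invented) npm illustration of the two strategies being discussed: a caret range silently pulls in whatever the newest compatible release is at install time, while an exact pin holds a build still until you deliberately upgrade:

```json
{
  "dependencies": {
    "floating-lib": "^1.3.0",
    "pinned-lib": "1.3.0"
  }
}
```

`^1.3.0` accepts any 1.x release at or above 1.3.0, so you pick up security fixes automatically but also any newly compromised release; `1.3.0` (plus a committed lockfile) gives reproducible installs but leaves patching entirely up to you. Both choices are a bet on which failure mode you fear more.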
to tell “normal people” about security? “Yeah, don’t download files from people you don’t trust and never run executables from the web. How do I install this programming utility? Blindly run code from over 300 people and hope none of them wanted to sneak something malicious in there.”
You're starting to come to an interesting realization about the state of 'modern' programming and the risks we saw coming 20 years ago.
I don’t want to go back to the days [...]
You don't have to trade convenience for safety, but having worked in OS security, I would recommend it.
Pulling in random stuff you haven't validated should feel really uncomfortable as a professional.
Honestly, after doing all that, getting noticed by a person who is not a security researcher or even a programmer, all because of a 300ms delay at startup, would be depressing.
The problem I have with this meme post is that it gives a false sense of security, when it should not.
Open or closed source, human beings have to be very diligent and truly spend the time reviewing others' code, even when their project leads are pressuring them to work faster and cut corners.
This situation was a textbook example of how that does not always happen. Granted, duplicity was involved, but still.
In many ways, distributed open source software exposes more social attack surfaces, because the system itself is designed to be distributed, with many people each handling a different responsibility. Almost every open source license includes an explicit disclaimer of warranty, with language that says something like this:
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
Well, bring together enough dependencies, and you'll see that certain widely distributed software packages depend on the trust of dozens, if not hundreds, of independent maintainers.
This particular xz vulnerability seems to have affected systemd and sshd, using what was a socially engineered attack on a weak point in the entire dependency chain. And this particular type of social engineering (maintainer burnout, looking for a volunteer to take over) seems to fit more directly into open source culture than closed source/corporate development culture.
In the closed source world, there might be fewer places to probe for a weak link (socially or technically), which makes certain types of attacks more difficult. In other words, it might truly be the case that closed source software is less vulnerable to certain types of attacks, even if detection/audit/mitigation of those types of attacks is harder for closed source.
It's a tradeoff, not a free lunch. I still generally trust open source stuff more, but let's not pretend it's literally better in every way.
It’s a tradeoff, not a free lunch. I still generally trust open source stuff more, but let’s not pretend it’s literally better in every way.
Totally agree.
All the pushback I'm getting is from people who seem worried that open source will lose a positive talking point in comparisons with closed-source systems, which is not my intention. (I personally use Fedora/KDE.)
But sticking our heads in the sand doesn't help things; when issues arise, we should acknowledge them and correct them.
using what was a socially engineered attack on a weak point in the entire dependency chain.
An example of what you may be speaking about, indirectly. We can only hope that maintainers do due diligence, but it is volunteer work.
There are two big problems with the point that you're trying to make:
1. There are many open source projects being run by organizations with as much (often stronger) governance over commit access as a private corporation would have over its closed source code base. The most widely used projects tend to fall under this category, like Linux, React, Angular, Go, JavaScript, and innumerable others. Governance models for a project are a very reasonable thing to consider when deciding whether to use a dependency for your application or library. There's a fair argument to be made that the governance model of this xz project should have been flagged sooner, and hopefully this incident will help stir broader awareness for that. But unlike a closed source code base, you can actually know the governance model and commit access model of open source software. When it comes to closed source software you don't know anything about the company's hiring practices, background checks, what access they might provide to outsourced agents from other countries who may be compromised, etc.
2. You're assuming that 100% of the source code used in a closed source project was developed by that company and according to the company's governance model, which you assume is a good one. In reality BSD/MIT licensed (and illegally GPL licensed) open source software is being shoved into closed source code bases all the time. The difference with closed source software is that you have no way of knowing that this is the case. For all you know some intern already shoved a compromised xz into some closed source software that you're using, and since that intern is gone now it will be years before anyone in the company notices that their software has a well known backdoor sitting in it.
Yeah, they got lucky. But it shows how susceptible systems are. Really makes you wonder how many systems are infected with something similar; this wouldn't be the first backdoor to live in Linux systems.
Depends. For example, Debian's unattended-upgrades caused system restarts after many updates, which was extremely inconvenient for me because I have a more manual bring-up process. I had restarts turned off in its settings and it still restarted.
I uninstalled it and have not had a single unwanted restart since then, so manual upgrades it is.
Because AbstractTransactionAwarePersistenceManagerFactoryProxyBean needs to spin up 32 Electron instances (one for each thread) to ensure scalability and robustness, and then WelcomeSolutionStrategyExecutor needs to parse 300 megabytes of JavaScript to facilitate rendering the "welcome" screen.
If your security relies on hidden information, then it's at risk of being broken at any time by someone who finds that information one way or another. Open source security is so much stronger because it works independently of knowledge of the system. See all the open source cryptography that secures the web, for example.
An open source PoC and fix increase awareness of issues and help everyone make progress. You also get many more eyes verifying your analysis and fix, as well as people checking whether there could be other consequences in other systems. Some security specialists are probably going to create techniques to detect this kind of sophisticated attack in the future.
This doesn't happen with closed source.
If some system company/administrator is too lazy to update, the fault is on them, not on the person who made all the information available for you to understand and fix the issue.
Even in open source, responsible disclosure is generally possible.
See, e.g., Spectre/Meltdown, where the researchers worked privately with senior Linux kernel developers for months to have patches ready on all supported branches before they made the vulnerability public.
This is why we invented responsible disclosure, which even companies like Apple practice. Although in this case, this was the very beginning of what seemed to be a rollout, so if it does affect systems, it's not very many. And if they are affected, the solution is pretty obvious.
In this case it seems the backdoor is only usable by someone who has the correct key. Spotting and reverting something fishy is, in cases like this, easier than deriving a working exploit; it takes a lot of time to figure out what is actually going on.
Fixing a bug doesn't automatically hand script kiddies an easy-to-use exploit.